Test Report: KVM_Linux_crio 20512

48b5bd1b410deb6f0834786c8abc7687a18ec8ba:2025-04-14:39137

Tests failed (12/327)

TestAddons/parallel/Ingress (152.98s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-885191 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-885191 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-885191 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [48d650a7-bbfb-493e-87b8-da6b06272724] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [48d650a7-bbfb-493e-87b8-da6b06272724] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003719533s
I0414 14:20:25.638441 1853270 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-885191 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.708753077s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-885191 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.123
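
Note on the failure above: exit status 28 from the in-VM curl matches curl's "operation timed out" error code (CURLE_OPERATION_TIMEDOUT), propagated back through minikube ssh, suggesting the request to the ingress controller on 127.0.0.1 inside the VM never completed. A minimal sketch for re-running the same probe by hand (assumes the addons-885191 profile is still up and the nginx Ingress and pod from testdata are still applied):

  # Confirm the ingress-nginx controller pod is Ready (same selector the test waits on)
  kubectl --context addons-885191 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
  # Repeat the probe the test performs; another exit status 28 reproduces the timeout seen above
  out/minikube-linux-amd64 -p addons-885191 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
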
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-885191 -n addons-885191
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 logs -n 25: (1.527598427s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-174763                                                                     | download-only-174763 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	| delete  | -p download-only-370703                                                                     | download-only-370703 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	| delete  | -p download-only-174763                                                                     | download-only-174763 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-452877 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | binary-mirror-452877                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42987                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-452877                                                                     | binary-mirror-452877 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	| addons  | disable dashboard -p                                                                        | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | addons-885191                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | addons-885191                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-885191 --wait=true                                                                | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:19 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-885191 addons disable                                                                | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:19 UTC | 14 Apr 25 14:19 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-885191 addons disable                                                                | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:19 UTC | 14 Apr 25 14:19 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:19 UTC | 14 Apr 25 14:19 UTC |
	|         | -p addons-885191                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-885191 addons                                                                        | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:19 UTC | 14 Apr 25 14:19 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-885191 addons                                                                        | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-885191 addons disable                                                                | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-885191 ip                                                                            | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	| addons  | addons-885191 addons disable                                                                | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-885191 addons disable                                                                | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-885191 addons                                                                        | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-885191 ssh cat                                                                       | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	|         | /opt/local-path-provisioner/pvc-f2e5318b-1d01-41b3-99fd-f0b6fdfd26b4_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-885191 addons disable                                                                | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:21 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-885191 addons                                                                        | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-885191 ssh curl -s                                                                   | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-885191 addons                                                                        | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-885191 addons                                                                        | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:20 UTC | 14 Apr 25 14:20 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-885191 ip                                                                            | addons-885191        | jenkins | v1.35.0 | 14 Apr 25 14:22 UTC | 14 Apr 25 14:22 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:17:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:17:16.107488 1853877 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:17:16.107598 1853877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:17:16.107606 1853877 out.go:358] Setting ErrFile to fd 2...
	I0414 14:17:16.107612 1853877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:17:16.107823 1853877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 14:17:16.108484 1853877 out.go:352] Setting JSON to false
	I0414 14:17:16.109678 1853877 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":35980,"bootTime":1744604256,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:17:16.109793 1853877 start.go:139] virtualization: kvm guest
	I0414 14:17:16.111845 1853877 out.go:177] * [addons-885191] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:17:16.113295 1853877 notify.go:220] Checking for updates...
	I0414 14:17:16.113312 1853877 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:17:16.114720 1853877 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:17:16.116199 1853877 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 14:17:16.117483 1853877 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 14:17:16.118817 1853877 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:17:16.120234 1853877 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:17:16.121907 1853877 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:17:16.156831 1853877 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 14:17:16.158075 1853877 start.go:297] selected driver: kvm2
	I0414 14:17:16.158093 1853877 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:17:16.158111 1853877 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:17:16.159210 1853877 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:17:16.159317 1853877 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:17:16.177192 1853877 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:17:16.177253 1853877 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:17:16.177521 1853877 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:17:16.177559 1853877 cni.go:84] Creating CNI manager for ""
	I0414 14:17:16.177602 1853877 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:17:16.177615 1853877 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:17:16.177661 1853877 start.go:340] cluster config:
	{Name:addons-885191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-885191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:17:16.177770 1853877 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:17:16.180730 1853877 out.go:177] * Starting "addons-885191" primary control-plane node in "addons-885191" cluster
	I0414 14:17:16.182047 1853877 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:17:16.182113 1853877 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 14:17:16.182125 1853877 cache.go:56] Caching tarball of preloaded images
	I0414 14:17:16.182246 1853877 preload.go:172] Found /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 14:17:16.182260 1853877 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 14:17:16.182656 1853877 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/config.json ...
	I0414 14:17:16.182687 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/config.json: {Name:mkcc96159b8e8289f2a86d0ff223b1452bd7db92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:16.182884 1853877 start.go:360] acquireMachinesLock for addons-885191: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 14:17:16.182953 1853877 start.go:364] duration metric: took 49.786µs to acquireMachinesLock for "addons-885191"
	I0414 14:17:16.182977 1853877 start.go:93] Provisioning new machine with config: &{Name:addons-885191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-885191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:17:16.183071 1853877 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 14:17:16.185798 1853877 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0414 14:17:16.185998 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:17:16.186062 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:17:16.202320 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0414 14:17:16.202860 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:17:16.203362 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:17:16.203382 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:17:16.203828 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:17:16.204052 1853877 main.go:141] libmachine: (addons-885191) Calling .GetMachineName
	I0414 14:17:16.204247 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:17:16.204417 1853877 start.go:159] libmachine.API.Create for "addons-885191" (driver="kvm2")
	I0414 14:17:16.204459 1853877 client.go:168] LocalClient.Create starting
	I0414 14:17:16.204518 1853877 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem
	I0414 14:17:16.464726 1853877 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem
	I0414 14:17:16.697408 1853877 main.go:141] libmachine: Running pre-create checks...
	I0414 14:17:16.697440 1853877 main.go:141] libmachine: (addons-885191) Calling .PreCreateCheck
	I0414 14:17:16.698154 1853877 main.go:141] libmachine: (addons-885191) Calling .GetConfigRaw
	I0414 14:17:16.698702 1853877 main.go:141] libmachine: Creating machine...
	I0414 14:17:16.698721 1853877 main.go:141] libmachine: (addons-885191) Calling .Create
	I0414 14:17:16.698965 1853877 main.go:141] libmachine: (addons-885191) creating KVM machine...
	I0414 14:17:16.698990 1853877 main.go:141] libmachine: (addons-885191) creating network...
	I0414 14:17:16.700430 1853877 main.go:141] libmachine: (addons-885191) DBG | found existing default KVM network
	I0414 14:17:16.701179 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:16.700999 1853899 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002045d0}
	I0414 14:17:16.701208 1853877 main.go:141] libmachine: (addons-885191) DBG | created network xml: 
	I0414 14:17:16.701224 1853877 main.go:141] libmachine: (addons-885191) DBG | <network>
	I0414 14:17:16.701237 1853877 main.go:141] libmachine: (addons-885191) DBG |   <name>mk-addons-885191</name>
	I0414 14:17:16.701246 1853877 main.go:141] libmachine: (addons-885191) DBG |   <dns enable='no'/>
	I0414 14:17:16.701256 1853877 main.go:141] libmachine: (addons-885191) DBG |   
	I0414 14:17:16.701268 1853877 main.go:141] libmachine: (addons-885191) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 14:17:16.701279 1853877 main.go:141] libmachine: (addons-885191) DBG |     <dhcp>
	I0414 14:17:16.701328 1853877 main.go:141] libmachine: (addons-885191) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 14:17:16.701349 1853877 main.go:141] libmachine: (addons-885191) DBG |     </dhcp>
	I0414 14:17:16.701356 1853877 main.go:141] libmachine: (addons-885191) DBG |   </ip>
	I0414 14:17:16.701377 1853877 main.go:141] libmachine: (addons-885191) DBG |   
	I0414 14:17:16.701385 1853877 main.go:141] libmachine: (addons-885191) DBG | </network>
	I0414 14:17:16.701401 1853877 main.go:141] libmachine: (addons-885191) DBG | 
	I0414 14:17:16.707461 1853877 main.go:141] libmachine: (addons-885191) DBG | trying to create private KVM network mk-addons-885191 192.168.39.0/24...
	I0414 14:17:16.784771 1853877 main.go:141] libmachine: (addons-885191) setting up store path in /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191 ...
	I0414 14:17:16.784816 1853877 main.go:141] libmachine: (addons-885191) DBG | private KVM network mk-addons-885191 192.168.39.0/24 created
	I0414 14:17:16.784830 1853877 main.go:141] libmachine: (addons-885191) building disk image from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:17:16.784842 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:16.784655 1853899 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 14:17:16.784888 1853877 main.go:141] libmachine: (addons-885191) Downloading /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 14:17:17.097028 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:17.096878 1853899 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa...
	I0414 14:17:17.320975 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:17.320785 1853899 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/addons-885191.rawdisk...
	I0414 14:17:17.321011 1853877 main.go:141] libmachine: (addons-885191) DBG | Writing magic tar header
	I0414 14:17:17.321024 1853877 main.go:141] libmachine: (addons-885191) DBG | Writing SSH key tar header
	I0414 14:17:17.321035 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:17.320948 1853899 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191 ...
	I0414 14:17:17.321053 1853877 main.go:141] libmachine: (addons-885191) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191
	I0414 14:17:17.321122 1853877 main.go:141] libmachine: (addons-885191) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191 (perms=drwx------)
	I0414 14:17:17.321149 1853877 main.go:141] libmachine: (addons-885191) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines
	I0414 14:17:17.321156 1853877 main.go:141] libmachine: (addons-885191) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines (perms=drwxr-xr-x)
	I0414 14:17:17.321181 1853877 main.go:141] libmachine: (addons-885191) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 14:17:17.321212 1853877 main.go:141] libmachine: (addons-885191) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube (perms=drwxr-xr-x)
	I0414 14:17:17.321228 1853877 main.go:141] libmachine: (addons-885191) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971
	I0414 14:17:17.321238 1853877 main.go:141] libmachine: (addons-885191) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971 (perms=drwxrwxr-x)
	I0414 14:17:17.321246 1853877 main.go:141] libmachine: (addons-885191) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 14:17:17.321251 1853877 main.go:141] libmachine: (addons-885191) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 14:17:17.321259 1853877 main.go:141] libmachine: (addons-885191) creating domain...
	I0414 14:17:17.321293 1853877 main.go:141] libmachine: (addons-885191) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 14:17:17.321312 1853877 main.go:141] libmachine: (addons-885191) DBG | checking permissions on dir: /home/jenkins
	I0414 14:17:17.321345 1853877 main.go:141] libmachine: (addons-885191) DBG | checking permissions on dir: /home
	I0414 14:17:17.321363 1853877 main.go:141] libmachine: (addons-885191) DBG | skipping /home - not owner
	I0414 14:17:17.322553 1853877 main.go:141] libmachine: (addons-885191) define libvirt domain using xml: 
	I0414 14:17:17.322566 1853877 main.go:141] libmachine: (addons-885191) <domain type='kvm'>
	I0414 14:17:17.322572 1853877 main.go:141] libmachine: (addons-885191)   <name>addons-885191</name>
	I0414 14:17:17.322577 1853877 main.go:141] libmachine: (addons-885191)   <memory unit='MiB'>4000</memory>
	I0414 14:17:17.322582 1853877 main.go:141] libmachine: (addons-885191)   <vcpu>2</vcpu>
	I0414 14:17:17.322587 1853877 main.go:141] libmachine: (addons-885191)   <features>
	I0414 14:17:17.322594 1853877 main.go:141] libmachine: (addons-885191)     <acpi/>
	I0414 14:17:17.322600 1853877 main.go:141] libmachine: (addons-885191)     <apic/>
	I0414 14:17:17.322607 1853877 main.go:141] libmachine: (addons-885191)     <pae/>
	I0414 14:17:17.322633 1853877 main.go:141] libmachine: (addons-885191)     
	I0414 14:17:17.322643 1853877 main.go:141] libmachine: (addons-885191)   </features>
	I0414 14:17:17.322658 1853877 main.go:141] libmachine: (addons-885191)   <cpu mode='host-passthrough'>
	I0414 14:17:17.322682 1853877 main.go:141] libmachine: (addons-885191)   
	I0414 14:17:17.322693 1853877 main.go:141] libmachine: (addons-885191)   </cpu>
	I0414 14:17:17.322707 1853877 main.go:141] libmachine: (addons-885191)   <os>
	I0414 14:17:17.322716 1853877 main.go:141] libmachine: (addons-885191)     <type>hvm</type>
	I0414 14:17:17.322724 1853877 main.go:141] libmachine: (addons-885191)     <boot dev='cdrom'/>
	I0414 14:17:17.322735 1853877 main.go:141] libmachine: (addons-885191)     <boot dev='hd'/>
	I0414 14:17:17.322745 1853877 main.go:141] libmachine: (addons-885191)     <bootmenu enable='no'/>
	I0414 14:17:17.322755 1853877 main.go:141] libmachine: (addons-885191)   </os>
	I0414 14:17:17.322764 1853877 main.go:141] libmachine: (addons-885191)   <devices>
	I0414 14:17:17.322772 1853877 main.go:141] libmachine: (addons-885191)     <disk type='file' device='cdrom'>
	I0414 14:17:17.322784 1853877 main.go:141] libmachine: (addons-885191)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/boot2docker.iso'/>
	I0414 14:17:17.322812 1853877 main.go:141] libmachine: (addons-885191)       <target dev='hdc' bus='scsi'/>
	I0414 14:17:17.322823 1853877 main.go:141] libmachine: (addons-885191)       <readonly/>
	I0414 14:17:17.322830 1853877 main.go:141] libmachine: (addons-885191)     </disk>
	I0414 14:17:17.322838 1853877 main.go:141] libmachine: (addons-885191)     <disk type='file' device='disk'>
	I0414 14:17:17.322858 1853877 main.go:141] libmachine: (addons-885191)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 14:17:17.322874 1853877 main.go:141] libmachine: (addons-885191)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/addons-885191.rawdisk'/>
	I0414 14:17:17.322884 1853877 main.go:141] libmachine: (addons-885191)       <target dev='hda' bus='virtio'/>
	I0414 14:17:17.322895 1853877 main.go:141] libmachine: (addons-885191)     </disk>
	I0414 14:17:17.322912 1853877 main.go:141] libmachine: (addons-885191)     <interface type='network'>
	I0414 14:17:17.322942 1853877 main.go:141] libmachine: (addons-885191)       <source network='mk-addons-885191'/>
	I0414 14:17:17.322960 1853877 main.go:141] libmachine: (addons-885191)       <model type='virtio'/>
	I0414 14:17:17.322972 1853877 main.go:141] libmachine: (addons-885191)     </interface>
	I0414 14:17:17.322986 1853877 main.go:141] libmachine: (addons-885191)     <interface type='network'>
	I0414 14:17:17.322998 1853877 main.go:141] libmachine: (addons-885191)       <source network='default'/>
	I0414 14:17:17.323008 1853877 main.go:141] libmachine: (addons-885191)       <model type='virtio'/>
	I0414 14:17:17.323018 1853877 main.go:141] libmachine: (addons-885191)     </interface>
	I0414 14:17:17.323024 1853877 main.go:141] libmachine: (addons-885191)     <serial type='pty'>
	I0414 14:17:17.323029 1853877 main.go:141] libmachine: (addons-885191)       <target port='0'/>
	I0414 14:17:17.323034 1853877 main.go:141] libmachine: (addons-885191)     </serial>
	I0414 14:17:17.323047 1853877 main.go:141] libmachine: (addons-885191)     <console type='pty'>
	I0414 14:17:17.323059 1853877 main.go:141] libmachine: (addons-885191)       <target type='serial' port='0'/>
	I0414 14:17:17.323073 1853877 main.go:141] libmachine: (addons-885191)     </console>
	I0414 14:17:17.323085 1853877 main.go:141] libmachine: (addons-885191)     <rng model='virtio'>
	I0414 14:17:17.323095 1853877 main.go:141] libmachine: (addons-885191)       <backend model='random'>/dev/random</backend>
	I0414 14:17:17.323106 1853877 main.go:141] libmachine: (addons-885191)     </rng>
	I0414 14:17:17.323111 1853877 main.go:141] libmachine: (addons-885191)     
	I0414 14:17:17.323116 1853877 main.go:141] libmachine: (addons-885191)     
	I0414 14:17:17.323122 1853877 main.go:141] libmachine: (addons-885191)   </devices>
	I0414 14:17:17.323134 1853877 main.go:141] libmachine: (addons-885191) </domain>
	I0414 14:17:17.323143 1853877 main.go:141] libmachine: (addons-885191) 
	I0414 14:17:17.328364 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:10:ad:19 in network default
	I0414 14:17:17.328949 1853877 main.go:141] libmachine: (addons-885191) starting domain...
	I0414 14:17:17.328976 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:17.328985 1853877 main.go:141] libmachine: (addons-885191) ensuring networks are active...
	I0414 14:17:17.329799 1853877 main.go:141] libmachine: (addons-885191) Ensuring network default is active
	I0414 14:17:17.330173 1853877 main.go:141] libmachine: (addons-885191) Ensuring network mk-addons-885191 is active
	I0414 14:17:17.330671 1853877 main.go:141] libmachine: (addons-885191) getting domain XML...
	I0414 14:17:17.331333 1853877 main.go:141] libmachine: (addons-885191) creating domain...
	I0414 14:17:17.692012 1853877 main.go:141] libmachine: (addons-885191) waiting for IP...
	I0414 14:17:17.692975 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:17.693427 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:17.693538 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:17.693447 1853899 retry.go:31] will retry after 296.155354ms: waiting for domain to come up
	I0414 14:17:17.991080 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:17.991623 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:17.991650 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:17.991574 1853899 retry.go:31] will retry after 300.32817ms: waiting for domain to come up
	I0414 14:17:18.293115 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:18.293551 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:18.293590 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:18.293509 1853899 retry.go:31] will retry after 331.084725ms: waiting for domain to come up
	I0414 14:17:18.625935 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:18.626453 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:18.626478 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:18.626412 1853899 retry.go:31] will retry after 496.940707ms: waiting for domain to come up
	I0414 14:17:19.125236 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:19.125742 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:19.125768 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:19.125713 1853899 retry.go:31] will retry after 621.792213ms: waiting for domain to come up
	I0414 14:17:19.749758 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:19.750154 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:19.750177 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:19.750118 1853899 retry.go:31] will retry after 639.39958ms: waiting for domain to come up
	I0414 14:17:20.391099 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:20.391661 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:20.391776 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:20.391562 1853899 retry.go:31] will retry after 948.810258ms: waiting for domain to come up
	I0414 14:17:21.341633 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:21.342141 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:21.342188 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:21.342110 1853899 retry.go:31] will retry after 1.068852021s: waiting for domain to come up
	I0414 14:17:22.412544 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:22.412998 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:22.413041 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:22.412979 1853899 retry.go:31] will retry after 1.38006324s: waiting for domain to come up
	I0414 14:17:23.794356 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:23.794820 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:23.794858 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:23.794768 1853899 retry.go:31] will retry after 1.457581647s: waiting for domain to come up
	I0414 14:17:25.254445 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:25.254967 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:25.254992 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:25.254919 1853899 retry.go:31] will retry after 2.484033967s: waiting for domain to come up
	I0414 14:17:27.741214 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:27.741632 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:27.741674 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:27.741604 1853899 retry.go:31] will retry after 3.127555408s: waiting for domain to come up
	I0414 14:17:30.871547 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:30.871984 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:30.872014 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:30.871966 1853899 retry.go:31] will retry after 3.99745201s: waiting for domain to come up
	I0414 14:17:34.873929 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:34.874324 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find current IP address of domain addons-885191 in network mk-addons-885191
	I0414 14:17:34.874350 1853877 main.go:141] libmachine: (addons-885191) DBG | I0414 14:17:34.874294 1853899 retry.go:31] will retry after 4.669914331s: waiting for domain to come up
	I0414 14:17:39.547853 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:39.548368 1853877 main.go:141] libmachine: (addons-885191) found domain IP: 192.168.39.123
	I0414 14:17:39.548391 1853877 main.go:141] libmachine: (addons-885191) reserving static IP address...
	I0414 14:17:39.548404 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has current primary IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:39.548779 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find host DHCP lease matching {name: "addons-885191", mac: "52:54:00:2a:91:fa", ip: "192.168.39.123"} in network mk-addons-885191
	I0414 14:17:39.631362 1853877 main.go:141] libmachine: (addons-885191) reserved static IP address 192.168.39.123 for domain addons-885191
	I0414 14:17:39.631393 1853877 main.go:141] libmachine: (addons-885191) waiting for SSH...
	I0414 14:17:39.631402 1853877 main.go:141] libmachine: (addons-885191) DBG | Getting to WaitForSSH function...
	I0414 14:17:39.633866 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:39.634198 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191
	I0414 14:17:39.634226 1853877 main.go:141] libmachine: (addons-885191) DBG | unable to find defined IP address of network mk-addons-885191 interface with MAC address 52:54:00:2a:91:fa
	I0414 14:17:39.634431 1853877 main.go:141] libmachine: (addons-885191) DBG | Using SSH client type: external
	I0414 14:17:39.634455 1853877 main.go:141] libmachine: (addons-885191) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa (-rw-------)
	I0414 14:17:39.634502 1853877 main.go:141] libmachine: (addons-885191) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:17:39.634524 1853877 main.go:141] libmachine: (addons-885191) DBG | About to run SSH command:
	I0414 14:17:39.634547 1853877 main.go:141] libmachine: (addons-885191) DBG | exit 0
	I0414 14:17:39.638660 1853877 main.go:141] libmachine: (addons-885191) DBG | SSH cmd err, output: exit status 255: 
	I0414 14:17:39.638682 1853877 main.go:141] libmachine: (addons-885191) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0414 14:17:39.638689 1853877 main.go:141] libmachine: (addons-885191) DBG | command : exit 0
	I0414 14:17:39.638719 1853877 main.go:141] libmachine: (addons-885191) DBG | err     : exit status 255
	I0414 14:17:39.638743 1853877 main.go:141] libmachine: (addons-885191) DBG | output  : 
	I0414 14:17:42.639459 1853877 main.go:141] libmachine: (addons-885191) DBG | Getting to WaitForSSH function...
	I0414 14:17:42.642022 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:42.642357 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:42.642406 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:42.642613 1853877 main.go:141] libmachine: (addons-885191) DBG | Using SSH client type: external
	I0414 14:17:42.642658 1853877 main.go:141] libmachine: (addons-885191) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa (-rw-------)
	I0414 14:17:42.642692 1853877 main.go:141] libmachine: (addons-885191) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 14:17:42.642708 1853877 main.go:141] libmachine: (addons-885191) DBG | About to run SSH command:
	I0414 14:17:42.642719 1853877 main.go:141] libmachine: (addons-885191) DBG | exit 0
	I0414 14:17:42.770800 1853877 main.go:141] libmachine: (addons-885191) DBG | SSH cmd err, output: <nil>: 
	I0414 14:17:42.771139 1853877 main.go:141] libmachine: (addons-885191) KVM machine creation complete
	I0414 14:17:42.771466 1853877 main.go:141] libmachine: (addons-885191) Calling .GetConfigRaw
	I0414 14:17:42.772083 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:17:42.772279 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:17:42.772436 1853877 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 14:17:42.772452 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:17:42.773686 1853877 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 14:17:42.773701 1853877 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 14:17:42.773706 1853877 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 14:17:42.773712 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:42.775879 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:42.776177 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:42.776222 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:42.776426 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:42.776645 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:42.776817 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:42.776945 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:42.777091 1853877 main.go:141] libmachine: Using SSH client type: native
	I0414 14:17:42.777369 1853877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0414 14:17:42.777381 1853877 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 14:17:42.886048 1853877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:17:42.886079 1853877 main.go:141] libmachine: Detecting the provisioner...
	I0414 14:17:42.886087 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:42.889067 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:42.889332 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:42.889357 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:42.889515 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:42.889734 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:42.889914 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:42.890108 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:42.890414 1853877 main.go:141] libmachine: Using SSH client type: native
	I0414 14:17:42.890724 1853877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0414 14:17:42.890742 1853877 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 14:17:42.999373 1853877 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 14:17:42.999488 1853877 main.go:141] libmachine: found compatible host: buildroot
	I0414 14:17:42.999499 1853877 main.go:141] libmachine: Provisioning with buildroot...
	I0414 14:17:42.999507 1853877 main.go:141] libmachine: (addons-885191) Calling .GetMachineName
	I0414 14:17:42.999778 1853877 buildroot.go:166] provisioning hostname "addons-885191"
	I0414 14:17:42.999811 1853877 main.go:141] libmachine: (addons-885191) Calling .GetMachineName
	I0414 14:17:43.000013 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:43.002664 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.002993 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:43.003021 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.003193 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:43.003392 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:43.003543 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:43.003701 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:43.003861 1853877 main.go:141] libmachine: Using SSH client type: native
	I0414 14:17:43.004073 1853877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0414 14:17:43.004085 1853877 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-885191 && echo "addons-885191" | sudo tee /etc/hostname
	I0414 14:17:43.129957 1853877 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885191
	
	I0414 14:17:43.129999 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:43.133527 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.133993 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:43.134024 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.134232 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:43.134458 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:43.134633 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:43.134794 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:43.134963 1853877 main.go:141] libmachine: Using SSH client type: native
	I0414 14:17:43.135165 1853877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0414 14:17:43.135184 1853877 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-885191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-885191/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-885191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 14:17:43.251667 1853877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 14:17:43.251729 1853877 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 14:17:43.251755 1853877 buildroot.go:174] setting up certificates
	I0414 14:17:43.251781 1853877 provision.go:84] configureAuth start
	I0414 14:17:43.251796 1853877 main.go:141] libmachine: (addons-885191) Calling .GetMachineName
	I0414 14:17:43.252166 1853877 main.go:141] libmachine: (addons-885191) Calling .GetIP
	I0414 14:17:43.255101 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.255391 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:43.255418 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.255630 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:43.257980 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.258492 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:43.258527 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.258671 1853877 provision.go:143] copyHostCerts
	I0414 14:17:43.258760 1853877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 14:17:43.258915 1853877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 14:17:43.258990 1853877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 14:17:43.259057 1853877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.addons-885191 san=[127.0.0.1 192.168.39.123 addons-885191 localhost minikube]
	I0414 14:17:43.583855 1853877 provision.go:177] copyRemoteCerts
	I0414 14:17:43.583932 1853877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 14:17:43.583960 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:43.587017 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.587369 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:43.587396 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.587546 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:43.587767 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:43.587912 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:43.588035 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:17:43.673273 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 14:17:43.699570 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 14:17:43.725482 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 14:17:43.752050 1853877 provision.go:87] duration metric: took 500.249521ms to configureAuth
	I0414 14:17:43.752086 1853877 buildroot.go:189] setting minikube options for container-runtime
	I0414 14:17:43.752300 1853877 config.go:182] Loaded profile config "addons-885191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:17:43.752411 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:43.755277 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.755683 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:43.755713 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.755933 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:43.756176 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:43.756350 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:43.756498 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:43.756718 1853877 main.go:141] libmachine: Using SSH client type: native
	I0414 14:17:43.756928 1853877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0414 14:17:43.756944 1853877 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 14:17:43.993493 1853877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 14:17:43.993546 1853877 main.go:141] libmachine: Checking connection to Docker...
	I0414 14:17:43.993561 1853877 main.go:141] libmachine: (addons-885191) Calling .GetURL
	I0414 14:17:43.995135 1853877 main.go:141] libmachine: (addons-885191) DBG | using libvirt version 6000000
	I0414 14:17:43.997898 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.998286 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:43.998304 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:43.998621 1853877 main.go:141] libmachine: Docker is up and running!
	I0414 14:17:43.998646 1853877 main.go:141] libmachine: Reticulating splines...
	I0414 14:17:43.998655 1853877 client.go:171] duration metric: took 27.794182483s to LocalClient.Create
	I0414 14:17:43.998675 1853877 start.go:167] duration metric: took 27.794261192s to libmachine.API.Create "addons-885191"
	I0414 14:17:43.998685 1853877 start.go:293] postStartSetup for "addons-885191" (driver="kvm2")
	I0414 14:17:43.998694 1853877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 14:17:43.998714 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:17:43.998988 1853877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 14:17:43.999018 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:44.001437 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.001799 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:44.001834 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.002008 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:44.002237 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:44.002441 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:44.002612 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:17:44.089411 1853877 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 14:17:44.094062 1853877 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 14:17:44.094097 1853877 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 14:17:44.094183 1853877 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 14:17:44.094223 1853877 start.go:296] duration metric: took 95.531679ms for postStartSetup
	I0414 14:17:44.094276 1853877 main.go:141] libmachine: (addons-885191) Calling .GetConfigRaw
	I0414 14:17:44.095059 1853877 main.go:141] libmachine: (addons-885191) Calling .GetIP
	I0414 14:17:44.097898 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.098248 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:44.098274 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.098614 1853877 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/config.json ...
	I0414 14:17:44.098844 1853877 start.go:128] duration metric: took 27.915758559s to createHost
	I0414 14:17:44.098873 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:44.101176 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.101503 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:44.101542 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.101745 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:44.101943 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:44.102126 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:44.102233 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:44.102427 1853877 main.go:141] libmachine: Using SSH client type: native
	I0414 14:17:44.102694 1853877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0414 14:17:44.102706 1853877 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 14:17:44.211453 1853877 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640264.188629712
	
	I0414 14:17:44.211512 1853877 fix.go:216] guest clock: 1744640264.188629712
	I0414 14:17:44.211519 1853877 fix.go:229] Guest: 2025-04-14 14:17:44.188629712 +0000 UTC Remote: 2025-04-14 14:17:44.098859075 +0000 UTC m=+28.032515619 (delta=89.770637ms)
	I0414 14:17:44.211555 1853877 fix.go:200] guest clock delta is within tolerance: 89.770637ms
	I0414 14:17:44.211570 1853877 start.go:83] releasing machines lock for "addons-885191", held for 28.028603188s
	I0414 14:17:44.211604 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:17:44.211895 1853877 main.go:141] libmachine: (addons-885191) Calling .GetIP
	I0414 14:17:44.214802 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.215166 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:44.215195 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.215359 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:17:44.215880 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:17:44.216112 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:17:44.216251 1853877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 14:17:44.216307 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:44.216352 1853877 ssh_runner.go:195] Run: cat /version.json
	I0414 14:17:44.216380 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:17:44.219079 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.219220 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.219478 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:44.219508 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.219584 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:44.219617 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:44.219665 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:44.219807 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:17:44.219873 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:44.220008 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:44.220025 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:17:44.220176 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:17:44.220198 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:17:44.220362 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:17:44.324676 1853877 ssh_runner.go:195] Run: systemctl --version
	I0414 14:17:44.331477 1853877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 14:17:45.095821 1853877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 14:17:45.103204 1853877 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 14:17:45.103289 1853877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 14:17:45.121783 1853877 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 14:17:45.121808 1853877 start.go:495] detecting cgroup driver to use...
	I0414 14:17:45.121887 1853877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 14:17:45.140556 1853877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 14:17:45.157023 1853877 docker.go:217] disabling cri-docker service (if available) ...
	I0414 14:17:45.157089 1853877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 14:17:45.177103 1853877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 14:17:45.194306 1853877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 14:17:45.326418 1853877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 14:17:45.499364 1853877 docker.go:233] disabling docker service ...
	I0414 14:17:45.499442 1853877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 14:17:45.516027 1853877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 14:17:45.529635 1853877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 14:17:45.662499 1853877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 14:17:45.782251 1853877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 14:17:45.796808 1853877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 14:17:45.816316 1853877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 14:17:45.816385 1853877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:17:45.828012 1853877 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 14:17:45.828093 1853877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:17:45.839993 1853877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:17:45.852396 1853877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:17:45.864021 1853877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 14:17:45.876425 1853877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:17:45.887900 1853877 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:17:45.906313 1853877 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 14:17:45.917399 1853877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 14:17:45.927940 1853877 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 14:17:45.928022 1853877 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 14:17:45.942821 1853877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 14:17:45.953712 1853877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:17:46.069984 1853877 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 14:17:46.168046 1853877 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 14:17:46.168142 1853877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 14:17:46.173028 1853877 start.go:563] Will wait 60s for crictl version
	I0414 14:17:46.173124 1853877 ssh_runner.go:195] Run: which crictl
	I0414 14:17:46.177237 1853877 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 14:17:46.218880 1853877 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 14:17:46.219022 1853877 ssh_runner.go:195] Run: crio --version
	I0414 14:17:46.247970 1853877 ssh_runner.go:195] Run: crio --version
	I0414 14:17:46.282788 1853877 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 14:17:46.284225 1853877 main.go:141] libmachine: (addons-885191) Calling .GetIP
	I0414 14:17:46.287129 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:46.287487 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:17:46.287510 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:17:46.287735 1853877 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 14:17:46.292257 1853877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:17:46.305505 1853877 kubeadm.go:883] updating cluster {Name:addons-885191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-885191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 14:17:46.305628 1853877 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 14:17:46.305670 1853877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:17:46.342067 1853877 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 14:17:46.342175 1853877 ssh_runner.go:195] Run: which lz4
	I0414 14:17:46.346743 1853877 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 14:17:46.351183 1853877 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 14:17:46.351226 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 14:17:47.830421 1853877 crio.go:462] duration metric: took 1.483718258s to copy over tarball
	I0414 14:17:47.830515 1853877 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 14:17:50.115441 1853877 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.284884973s)
	I0414 14:17:50.115476 1853877 crio.go:469] duration metric: took 2.285018264s to extract the tarball
	I0414 14:17:50.115489 1853877 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 14:17:50.154453 1853877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 14:17:50.200296 1853877 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 14:17:50.200325 1853877 cache_images.go:84] Images are preloaded, skipping loading
	I0414 14:17:50.200334 1853877 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.32.2 crio true true} ...
	I0414 14:17:50.200457 1853877 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-885191 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-885191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 14:17:50.200525 1853877 ssh_runner.go:195] Run: crio config
	I0414 14:17:50.245086 1853877 cni.go:84] Creating CNI manager for ""
	I0414 14:17:50.245112 1853877 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:17:50.245125 1853877 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 14:17:50.245150 1853877 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-885191 NodeName:addons-885191 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 14:17:50.245275 1853877 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-885191"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 14:17:50.245346 1853877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 14:17:50.255963 1853877 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 14:17:50.256067 1853877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 14:17:50.266759 1853877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0414 14:17:50.286534 1853877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 14:17:50.304070 1853877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0414 14:17:50.321809 1853877 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0414 14:17:50.325912 1853877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 14:17:50.339503 1853877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:17:50.467287 1853877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:17:50.485694 1853877 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191 for IP: 192.168.39.123
	I0414 14:17:50.485720 1853877 certs.go:194] generating shared ca certs ...
	I0414 14:17:50.485757 1853877 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:50.485952 1853877 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 14:17:50.520185 1853877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt ...
	I0414 14:17:50.520222 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt: {Name:mk8de26027adc16e6ad5bc48a7f21295d6198a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:50.520434 1853877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key ...
	I0414 14:17:50.520450 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key: {Name:mk76fa11b27a9ebcf4d1c9478491646907afbbbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:50.520557 1853877 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 14:17:50.616585 1853877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt ...
	I0414 14:17:50.616619 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt: {Name:mk77556668d6e2b4cb3b52b678cd5c5d94ce5add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:50.616839 1853877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key ...
	I0414 14:17:50.616856 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key: {Name:mkf8887f6a4023cb229516013d28f4fdd2091e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:50.616957 1853877 certs.go:256] generating profile certs ...
	I0414 14:17:50.617037 1853877 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.key
	I0414 14:17:50.617069 1853877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt with IP's: []
	I0414 14:17:50.866972 1853877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt ...
	I0414 14:17:50.867013 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: {Name:mkd458f4e5731728382bc081c47c967c03000410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:50.867220 1853877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.key ...
	I0414 14:17:50.867261 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.key: {Name:mkec15f1fff9240dc58fbf69f3d3ae77a277e739 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:50.867393 1853877 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.key.c280cc93
	I0414 14:17:50.867417 1853877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.crt.c280cc93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.123]
	I0414 14:17:51.252293 1853877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.crt.c280cc93 ...
	I0414 14:17:51.252334 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.crt.c280cc93: {Name:mke6fff459a28b01dbb04b629f9c9c4484c44200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:51.252583 1853877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.key.c280cc93 ...
	I0414 14:17:51.252607 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.key.c280cc93: {Name:mk6bec02754d179e1b62f332de8081a4fdfc3441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:51.252717 1853877 certs.go:381] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.crt.c280cc93 -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.crt
	I0414 14:17:51.252835 1853877 certs.go:385] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.key.c280cc93 -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.key
	I0414 14:17:51.252916 1853877 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/proxy-client.key
	I0414 14:17:51.252973 1853877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/proxy-client.crt with IP's: []
	I0414 14:17:51.536118 1853877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/proxy-client.crt ...
	I0414 14:17:51.536161 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/proxy-client.crt: {Name:mkedfa1663489588428a41f92e6d030e136b3b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:51.536380 1853877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/proxy-client.key ...
	I0414 14:17:51.536401 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/proxy-client.key: {Name:mk2b6313fa42b1ac7b1dd764a063bf2f1ace7f3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:17:51.536645 1853877 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 14:17:51.536733 1853877 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 14:17:51.536769 1853877 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 14:17:51.536808 1853877 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 14:17:51.537595 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 14:17:51.567995 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 14:17:51.594052 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 14:17:51.620003 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 14:17:51.646241 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 14:17:51.672140 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 14:17:51.698010 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 14:17:51.724993 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 14:17:51.750329 1853877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 14:17:51.776724 1853877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 14:17:51.794372 1853877 ssh_runner.go:195] Run: openssl version
	I0414 14:17:51.800611 1853877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 14:17:51.812236 1853877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:17:51.817111 1853877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:17:51.817179 1853877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 14:17:51.823387 1853877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 14:17:51.835857 1853877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 14:17:51.840293 1853877 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 14:17:51.840358 1853877 kubeadm.go:392] StartCluster: {Name:addons-885191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-885191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:17:51.840431 1853877 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 14:17:51.840489 1853877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 14:17:51.878408 1853877 cri.go:89] found id: ""
	I0414 14:17:51.878507 1853877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 14:17:51.889300 1853877 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 14:17:51.899316 1853877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 14:17:51.909234 1853877 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 14:17:51.909267 1853877 kubeadm.go:157] found existing configuration files:
	
	I0414 14:17:51.909323 1853877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 14:17:51.918794 1853877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 14:17:51.918882 1853877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 14:17:51.928583 1853877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 14:17:51.938194 1853877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 14:17:51.938258 1853877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 14:17:51.948173 1853877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 14:17:51.960584 1853877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 14:17:51.960648 1853877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 14:17:51.971139 1853877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 14:17:51.982128 1853877 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 14:17:51.982192 1853877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 14:17:51.994428 1853877 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 14:17:52.059260 1853877 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 14:17:52.059368 1853877 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 14:17:52.172233 1853877 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 14:17:52.172388 1853877 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 14:17:52.172498 1853877 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 14:17:52.181162 1853877 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 14:17:52.273316 1853877 out.go:235]   - Generating certificates and keys ...
	I0414 14:17:52.273502 1853877 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 14:17:52.273607 1853877 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 14:17:52.307804 1853877 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 14:17:52.389296 1853877 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 14:17:52.679167 1853877 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 14:17:52.768178 1853877 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 14:17:52.873854 1853877 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 14:17:52.874025 1853877 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-885191 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I0414 14:17:53.098559 1853877 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 14:17:53.098780 1853877 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-885191 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I0414 14:17:53.228280 1853877 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 14:17:53.290233 1853877 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 14:17:53.484873 1853877 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 14:17:53.484974 1853877 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 14:17:53.597641 1853877 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 14:17:53.945256 1853877 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 14:17:54.294054 1853877 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 14:17:54.486576 1853877 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 14:17:54.730698 1853877 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 14:17:54.730816 1853877 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 14:17:54.731158 1853877 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 14:17:54.733676 1853877 out.go:235]   - Booting up control plane ...
	I0414 14:17:54.733829 1853877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 14:17:54.733953 1853877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 14:17:54.734064 1853877 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 14:17:54.751153 1853877 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 14:17:54.760160 1853877 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 14:17:54.760238 1853877 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 14:17:54.902905 1853877 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 14:17:54.903042 1853877 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 14:17:55.404958 1853877 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.316144ms
	I0414 14:17:55.405079 1853877 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 14:18:00.404154 1853877 kubeadm.go:310] [api-check] The API server is healthy after 5.00132163s
	I0414 14:18:00.422798 1853877 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 14:18:00.439099 1853877 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 14:18:00.470597 1853877 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 14:18:00.470868 1853877 kubeadm.go:310] [mark-control-plane] Marking the node addons-885191 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 14:18:00.483909 1853877 kubeadm.go:310] [bootstrap-token] Using token: 0kt55d.qu9v4jwx39mcm2io
	I0414 14:18:00.486265 1853877 out.go:235]   - Configuring RBAC rules ...
	I0414 14:18:00.486426 1853877 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 14:18:00.491364 1853877 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 14:18:00.498999 1853877 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 14:18:00.503286 1853877 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 14:18:00.508198 1853877 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 14:18:00.516659 1853877 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 14:18:00.818234 1853877 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 14:18:01.246402 1853877 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 14:18:01.812736 1853877 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 14:18:01.813753 1853877 kubeadm.go:310] 
	I0414 14:18:01.813809 1853877 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 14:18:01.813814 1853877 kubeadm.go:310] 
	I0414 14:18:01.813939 1853877 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 14:18:01.813984 1853877 kubeadm.go:310] 
	I0414 14:18:01.814028 1853877 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 14:18:01.814103 1853877 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 14:18:01.814182 1853877 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 14:18:01.814195 1853877 kubeadm.go:310] 
	I0414 14:18:01.814242 1853877 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 14:18:01.814248 1853877 kubeadm.go:310] 
	I0414 14:18:01.814285 1853877 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 14:18:01.814291 1853877 kubeadm.go:310] 
	I0414 14:18:01.814359 1853877 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 14:18:01.814493 1853877 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 14:18:01.814609 1853877 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 14:18:01.814624 1853877 kubeadm.go:310] 
	I0414 14:18:01.814746 1853877 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 14:18:01.814834 1853877 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 14:18:01.814841 1853877 kubeadm.go:310] 
	I0414 14:18:01.814944 1853877 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0kt55d.qu9v4jwx39mcm2io \
	I0414 14:18:01.815115 1853877 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f \
	I0414 14:18:01.815160 1853877 kubeadm.go:310] 	--control-plane 
	I0414 14:18:01.815170 1853877 kubeadm.go:310] 
	I0414 14:18:01.815282 1853877 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 14:18:01.815296 1853877 kubeadm.go:310] 
	I0414 14:18:01.815372 1853877 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0kt55d.qu9v4jwx39mcm2io \
	I0414 14:18:01.815493 1853877 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f 
	I0414 14:18:01.816320 1853877 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 14:18:01.816462 1853877 cni.go:84] Creating CNI manager for ""
	I0414 14:18:01.816483 1853877 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:18:01.818240 1853877 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 14:18:01.819409 1853877 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 14:18:01.831186 1853877 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 14:18:01.850764 1853877 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 14:18:01.850889 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:01.850906 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-885191 minikube.k8s.io/updated_at=2025_04_14T14_18_01_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=addons-885191 minikube.k8s.io/primary=true
	I0414 14:18:01.870095 1853877 ops.go:34] apiserver oom_adj: -16
	I0414 14:18:02.016560 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:02.516936 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:03.016916 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:03.517053 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:04.017514 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:04.517543 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:05.017557 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:05.517611 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:06.017632 1853877 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 14:18:06.149008 1853877 kubeadm.go:1113] duration metric: took 4.298204531s to wait for elevateKubeSystemPrivileges
	I0414 14:18:06.149055 1853877 kubeadm.go:394] duration metric: took 14.308701193s to StartCluster
	I0414 14:18:06.149084 1853877 settings.go:142] acquiring lock: {Name:mkf8fdccd744793c9a876a07da6b33fabe880d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:18:06.149235 1853877 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 14:18:06.149665 1853877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/kubeconfig: {Name:mk700cb2cf46a87df11c1873f52c26c76c14915e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 14:18:06.149913 1853877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 14:18:06.149939 1853877 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 14:18:06.150017 1853877 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0414 14:18:06.150177 1853877 addons.go:69] Setting yakd=true in profile "addons-885191"
	I0414 14:18:06.150210 1853877 addons.go:238] Setting addon yakd=true in "addons-885191"
	I0414 14:18:06.150216 1853877 config.go:182] Loaded profile config "addons-885191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:18:06.150220 1853877 addons.go:69] Setting registry=true in profile "addons-885191"
	I0414 14:18:06.150224 1853877 addons.go:69] Setting ingress=true in profile "addons-885191"
	I0414 14:18:06.150228 1853877 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-885191"
	I0414 14:18:06.150256 1853877 addons.go:69] Setting volcano=true in profile "addons-885191"
	I0414 14:18:06.150261 1853877 addons.go:238] Setting addon registry=true in "addons-885191"
	I0414 14:18:06.150265 1853877 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-885191"
	I0414 14:18:06.150269 1853877 addons.go:69] Setting default-storageclass=true in profile "addons-885191"
	I0414 14:18:06.150258 1853877 addons.go:238] Setting addon ingress=true in "addons-885191"
	I0414 14:18:06.150283 1853877 addons.go:69] Setting inspektor-gadget=true in profile "addons-885191"
	I0414 14:18:06.150293 1853877 addons.go:69] Setting gcp-auth=true in profile "addons-885191"
	I0414 14:18:06.150296 1853877 addons.go:238] Setting addon inspektor-gadget=true in "addons-885191"
	I0414 14:18:06.150291 1853877 addons.go:69] Setting storage-provisioner=true in profile "addons-885191"
	I0414 14:18:06.150304 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.150308 1853877 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-885191"
	I0414 14:18:06.150314 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.150324 1853877 mustload.go:65] Loading cluster: addons-885191
	I0414 14:18:06.150337 1853877 addons.go:238] Setting addon storage-provisioner=true in "addons-885191"
	I0414 14:18:06.150339 1853877 addons.go:69] Setting ingress-dns=true in profile "addons-885191"
	I0414 14:18:06.150339 1853877 addons.go:69] Setting cloud-spanner=true in profile "addons-885191"
	I0414 14:18:06.150352 1853877 addons.go:238] Setting addon ingress-dns=true in "addons-885191"
	I0414 14:18:06.150355 1853877 addons.go:238] Setting addon cloud-spanner=true in "addons-885191"
	I0414 14:18:06.150389 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.150402 1853877 addons.go:69] Setting volumesnapshots=true in profile "addons-885191"
	I0414 14:18:06.150406 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.150419 1853877 addons.go:238] Setting addon volumesnapshots=true in "addons-885191"
	I0414 14:18:06.150441 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.150613 1853877 config.go:182] Loaded profile config "addons-885191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:18:06.150269 1853877 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-885191"
	I0414 14:18:06.150837 1853877 addons.go:69] Setting metrics-server=true in profile "addons-885191"
	I0414 14:18:06.150264 1853877 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-885191"
	I0414 14:18:06.150852 1853877 addons.go:238] Setting addon metrics-server=true in "addons-885191"
	I0414 14:18:06.150851 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.150864 1853877 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-885191"
	I0414 14:18:06.150871 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.150893 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.150904 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.150910 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.150946 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.151006 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.151045 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.150270 1853877 addons.go:238] Setting addon volcano=true in "addons-885191"
	I0414 14:18:06.151222 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.151234 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.151282 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.150836 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.151343 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.150331 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.151415 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.151293 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.151588 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.151610 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.150282 1853877 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-885191"
	I0414 14:18:06.151711 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.151740 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.150247 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.152064 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.152103 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.152137 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.152173 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.151250 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.152740 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.150392 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.159514 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.162439 1853877 out.go:177] * Verifying Kubernetes components...
	I0414 14:18:06.168895 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.151271 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.169036 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.150333 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.151248 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.169327 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.151264 1853877 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-885191"
	I0414 14:18:06.169389 1853877 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-885191"
	I0414 14:18:06.170007 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.171062 1853877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 14:18:06.173808 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0414 14:18:06.174056 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I0414 14:18:06.174315 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0414 14:18:06.174642 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.174673 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.175201 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.175222 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.175295 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.175463 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.175475 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.175719 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.175739 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.175793 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.176097 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43985
	I0414 14:18:06.176097 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.176263 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.176564 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.176639 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.176827 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.176863 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.182791 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46369
	I0414 14:18:06.183052 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.183108 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.183118 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.183566 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.183599 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.183671 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.183691 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.183808 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.184310 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.184384 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.184947 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.184966 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.185055 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.185907 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.185923 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0414 14:18:06.185961 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.185997 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.186604 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.186631 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.187029 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.187551 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.187574 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.188013 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.188201 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.190267 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.196332 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0414 14:18:06.197065 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.197652 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.197675 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.198129 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.198739 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.198786 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.213731 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
	I0414 14:18:06.214571 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.215126 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.215186 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.215669 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.215896 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.218048 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.218421 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:06.218434 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:06.218675 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:06.218688 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:06.218697 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:06.218704 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:06.218964 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:06.218975 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	W0414 14:18:06.219093 1853877 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0414 14:18:06.227216 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.227264 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.228972 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0414 14:18:06.229837 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.230573 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.230599 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.230706 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I0414 14:18:06.231098 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.231175 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.231909 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.231929 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.232308 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.232326 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.232660 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I0414 14:18:06.232919 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41849
	I0414 14:18:06.233457 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.233467 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.234200 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.234255 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.235032 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.235467 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0414 14:18:06.235746 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.235761 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.235897 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.235909 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.236182 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.236314 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.236827 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.236867 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.237021 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36249
	I0414 14:18:06.237486 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.237512 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.237627 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.238101 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.238129 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.238202 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.238263 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0414 14:18:06.238320 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.238412 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.239039 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.239141 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.239588 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.239658 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.240225 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.240251 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.240531 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.240619 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0414 14:18:06.240874 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.240898 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.241265 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.241427 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.241515 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.242247 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.242272 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.242664 1853877 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 14:18:06.242742 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.243345 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.243396 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.244245 1853877 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:18:06.244269 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 14:18:06.244292 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.246274 1853877 addons.go:238] Setting addon default-storageclass=true in "addons-885191"
	I0414 14:18:06.246325 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.246743 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.246809 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.248506 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.249229 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.249262 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.249501 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.249708 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.249865 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.250005 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.261214 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0414 14:18:06.261914 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I0414 14:18:06.262134 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.262646 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.263261 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.263282 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.263809 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.264032 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.266096 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0414 14:18:06.266711 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.267474 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.267494 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.268154 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.268388 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.270797 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.272004 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.272100 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32895
	I0414 14:18:06.272836 1853877 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0414 14:18:06.273200 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0414 14:18:06.273650 1853877 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0414 14:18:06.273887 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.274149 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0414 14:18:06.274640 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.274722 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.274727 1853877 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 14:18:06.274752 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0414 14:18:06.274775 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.275465 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.275485 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.275561 1853877 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 14:18:06.275587 1853877 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 14:18:06.275612 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.275978 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.276274 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.276302 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.276317 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.276815 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.277439 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.277487 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.277767 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0414 14:18:06.278580 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.278603 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.278977 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.278997 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.279190 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.279422 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.279588 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.279643 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.279935 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.279955 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.280099 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.280124 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.280153 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.281322 1853877 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-885191"
	I0414 14:18:06.281378 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:06.281796 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.281838 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.282140 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.282260 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.282287 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.282318 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.282329 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.282288 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.282361 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.282515 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.282590 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.282630 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.282669 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.282969 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.283016 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.283138 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.283228 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.284431 1853877 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0414 14:18:06.286003 1853877 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 14:18:06.286025 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0414 14:18:06.286048 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.286218 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.286268 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.288483 1853877 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.39.0
	I0414 14:18:06.288565 1853877 out.go:177]   - Using image docker.io/registry:2.8.3
	I0414 14:18:06.289008 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41861
	I0414 14:18:06.289421 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.289477 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.289914 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.289940 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.289948 1853877 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0414 14:18:06.289995 1853877 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0414 14:18:06.290027 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.290075 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.290104 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.290637 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.290672 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.290833 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.290903 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.291160 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.291363 1853877 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0414 14:18:06.292575 1853877 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0414 14:18:06.292589 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0414 14:18:06.292607 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.292723 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0414 14:18:06.293123 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.294202 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.296098 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.296123 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.297103 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.297579 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.297602 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.298500 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.298595 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.298959 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.299395 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.299418 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.299449 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.299813 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.299873 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.300534 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.300638 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I0414 14:18:06.300843 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
	I0414 14:18:06.301254 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.301772 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.301902 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.302172 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.302284 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.302301 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.302315 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39883
	I0414 14:18:06.302974 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.303021 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.302976 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.303576 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.303594 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.303963 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.304023 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.304550 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.304605 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.305006 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.305551 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.305569 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.305643 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.306017 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41367
	I0414 14:18:06.306463 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.306643 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.306804 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
	I0414 14:18:06.307033 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.307734 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.307763 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.308280 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.308300 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.308353 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.308369 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.308732 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.308780 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.308894 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.309149 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.309478 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.309586 1853877 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0414 14:18:06.309692 1853877 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0414 14:18:06.310990 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.311193 1853877 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0414 14:18:06.311856 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.312116 1853877 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0414 14:18:06.312177 1853877 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 14:18:06.312298 1853877 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
	I0414 14:18:06.312377 1853877 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0414 14:18:06.312946 1853877 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0414 14:18:06.312966 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.312401 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35913
	I0414 14:18:06.313575 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.313338 1853877 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0414 14:18:06.314017 1853877 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0414 14:18:06.314323 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0414 14:18:06.314343 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.314047 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.314428 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.314641 1853877 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 14:18:06.315114 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.315372 1853877 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0414 14:18:06.315396 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.315527 1853877 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 14:18:06.315968 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0414 14:18:06.315992 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.316533 1853877 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 14:18:06.316633 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0414 14:18:06.316675 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.317592 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.318061 1853877 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0414 14:18:06.318345 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.318397 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.318593 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.318778 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.318778 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43533
	I0414 14:18:06.318995 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.319256 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.319258 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.320057 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.320075 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.320179 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.320387 1853877 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0414 14:18:06.320508 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.320806 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.320826 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.320965 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.321325 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:06.321396 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:06.321941 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.321954 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.321983 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.322026 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.322040 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.322051 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.322227 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.322293 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.322706 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.322728 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.322462 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.322488 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.322967 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.322983 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.323040 1853877 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0414 14:18:06.323335 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.323560 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.323792 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.323957 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.324410 1853877 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0414 14:18:06.325490 1853877 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0414 14:18:06.325521 1853877 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0414 14:18:06.325548 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.325606 1853877 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0414 14:18:06.326902 1853877 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0414 14:18:06.328045 1853877 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0414 14:18:06.328069 1853877 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0414 14:18:06.328093 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.329637 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.330224 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.330287 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.330498 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.330850 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.331045 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.331226 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.333096 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.333602 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.333638 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.333858 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.334016 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.334136 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.334227 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	W0414 14:18:06.335494 1853877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54230->192.168.39.123:22: read: connection reset by peer
	I0414 14:18:06.335526 1853877 retry.go:31] will retry after 219.631185ms: ssh: handshake failed: read tcp 192.168.39.1:54230->192.168.39.123:22: read: connection reset by peer
	I0414 14:18:06.340237 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
	I0414 14:18:06.340715 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.341262 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.341311 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.341597 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0414 14:18:06.341829 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.341957 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.342012 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:06.342495 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:06.342519 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:06.342942 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:06.343200 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:06.343651 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.344725 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:06.345035 1853877 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 14:18:06.345056 1853877 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 14:18:06.345071 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.345597 1853877 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0414 14:18:06.346875 1853877 out.go:177]   - Using image docker.io/busybox:stable
	I0414 14:18:06.348206 1853877 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 14:18:06.348227 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0414 14:18:06.348251 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:06.348894 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.348919 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.348938 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.349592 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.349802 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.350034 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.350388 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:06.352071 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.352677 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:06.352712 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:06.353339 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:06.355235 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:06.356203 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:06.356365 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	W0414 14:18:06.556186 1853877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54248->192.168.39.123:22: read: connection reset by peer
	I0414 14:18:06.556220 1853877 retry.go:31] will retry after 451.205828ms: ssh: handshake failed: read tcp 192.168.39.1:54248->192.168.39.123:22: read: connection reset by peer
	I0414 14:18:06.652540 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 14:18:06.729818 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 14:18:06.790893 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 14:18:06.841464 1853877 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0414 14:18:06.841507 1853877 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0414 14:18:06.842314 1853877 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 14:18:06.842340 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0414 14:18:06.943692 1853877 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0414 14:18:06.943727 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0414 14:18:06.950676 1853877 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0414 14:18:06.950719 1853877 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0414 14:18:06.951897 1853877 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0414 14:18:06.951923 1853877 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0414 14:18:06.963346 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0414 14:18:06.979059 1853877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 14:18:06.979190 1853877 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 14:18:06.986620 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 14:18:06.992952 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 14:18:07.009535 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 14:18:07.015116 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 14:18:07.118053 1853877 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 14:18:07.118083 1853877 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 14:18:07.131558 1853877 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0414 14:18:07.131600 1853877 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0414 14:18:07.140439 1853877 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0414 14:18:07.140472 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0414 14:18:07.199318 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0414 14:18:07.208240 1853877 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0414 14:18:07.208273 1853877 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0414 14:18:07.417729 1853877 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 14:18:07.417777 1853877 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 14:18:07.455239 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0414 14:18:07.464735 1853877 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0414 14:18:07.464763 1853877 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0414 14:18:07.472717 1853877 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0414 14:18:07.472757 1853877 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0414 14:18:07.630888 1853877 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0414 14:18:07.630933 1853877 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0414 14:18:07.662867 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 14:18:07.666787 1853877 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0414 14:18:07.666816 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0414 14:18:07.684711 1853877 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0414 14:18:07.684746 1853877 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0414 14:18:07.952610 1853877 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0414 14:18:07.952641 1853877 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0414 14:18:08.010896 1853877 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 14:18:08.010924 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0414 14:18:08.031847 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0414 14:18:08.343915 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 14:18:08.382757 1853877 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0414 14:18:08.382808 1853877 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0414 14:18:08.725498 1853877 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0414 14:18:08.725538 1853877 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0414 14:18:08.954275 1853877 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0414 14:18:08.954315 1853877 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0414 14:18:09.146066 1853877 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0414 14:18:09.146094 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0414 14:18:09.391556 1853877 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0414 14:18:09.391591 1853877 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0414 14:18:09.790662 1853877 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0414 14:18:09.790698 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0414 14:18:10.169061 1853877 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0414 14:18:10.169093 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0414 14:18:10.490828 1853877 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 14:18:10.490858 1853877 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0414 14:18:10.871248 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 14:18:12.110408 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.45780758s)
	I0414 14:18:12.110462 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.380605191s)
	I0414 14:18:12.110514 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:12.110526 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.319586602s)
	I0414 14:18:12.110534 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:12.110555 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:12.110469 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:12.110597 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:12.110570 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:12.110621 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.147247969s)
	I0414 14:18:12.110643 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:12.110652 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:12.110675 1853877 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.1315763s)
	I0414 14:18:12.110749 1853877 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.13150641s)
	I0414 14:18:12.110777 1853877 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
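The sed pipeline that just completed rewrites the coredns ConfigMap so that the Corefile resolves host.minikube.internal to the host-side bridge address (192.168.39.1 in this run) and enables query logging. To confirm the injected block from the same context, a sketch for inspection only (not something the test itself runs):

	kubectl --context addons-885191 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }

The commented fragment is exactly what the sed expression above inserts ahead of the forward directive.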
	I0414 14:18:12.111185 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:12.111211 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:12.111233 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:12.111255 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:12.111263 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:12.111267 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:12.111271 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:12.111275 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:12.111284 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:12.111290 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:12.111296 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:12.111312 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:12.111318 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:12.111326 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:12.111334 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:12.111339 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:12.111449 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:12.111461 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:12.111471 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:12.111479 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:12.111560 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:12.111589 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:12.111596 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:12.111783 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:12.111793 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:12.111969 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:12.112010 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:12.112018 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:12.112299 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:12.112326 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:12.112334 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:12.114561 1853877 node_ready.go:35] waiting up to 6m0s for node "addons-885191" to be "Ready" ...
	I0414 14:18:12.222629 1853877 node_ready.go:49] node "addons-885191" has status "Ready":"True"
	I0414 14:18:12.222658 1853877 node_ready.go:38] duration metric: took 108.054439ms for node "addons-885191" to be "Ready" ...
	I0414 14:18:12.222668 1853877 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:18:12.316119 1853877 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:12.362087 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:12.362115 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:12.362443 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:12.362467 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:12.648747 1853877 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-885191" context rescaled to 1 replicas
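The rescale logged here is equivalent to scaling the CoreDNS deployment down to a single replica by hand; a hedged one-liner for reference (not run by the test itself):

	kubectl --context addons-885191 -n kube-system scale deployment coredns --replicas=1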
	I0414 14:18:13.133491 1853877 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0414 14:18:13.133551 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:13.137155 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:13.137610 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:13.137662 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:13.137872 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:13.138097 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:13.138328 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:13.138553 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:13.895370 1853877 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0414 14:18:14.097261 1853877 addons.go:238] Setting addon gcp-auth=true in "addons-885191"
	I0414 14:18:14.097329 1853877 host.go:66] Checking if "addons-885191" exists ...
	I0414 14:18:14.097648 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:14.097680 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:14.114014 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0414 14:18:14.114482 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:14.114951 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:14.114974 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:14.115326 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:14.115785 1853877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:18:14.115812 1853877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:18:14.132390 1853877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I0414 14:18:14.132921 1853877 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:18:14.133450 1853877 main.go:141] libmachine: Using API Version  1
	I0414 14:18:14.133481 1853877 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:18:14.133942 1853877 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:18:14.134141 1853877 main.go:141] libmachine: (addons-885191) Calling .GetState
	I0414 14:18:14.136039 1853877 main.go:141] libmachine: (addons-885191) Calling .DriverName
	I0414 14:18:14.136402 1853877 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0414 14:18:14.136438 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHHostname
	I0414 14:18:14.139637 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:14.140096 1853877 main.go:141] libmachine: (addons-885191) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:91:fa", ip: ""} in network mk-addons-885191: {Iface:virbr1 ExpiryTime:2025-04-14 15:17:31 +0000 UTC Type:0 Mac:52:54:00:2a:91:fa Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:addons-885191 Clientid:01:52:54:00:2a:91:fa}
	I0414 14:18:14.140131 1853877 main.go:141] libmachine: (addons-885191) DBG | domain addons-885191 has defined IP address 192.168.39.123 and MAC address 52:54:00:2a:91:fa in network mk-addons-885191
	I0414 14:18:14.140261 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHPort
	I0414 14:18:14.140466 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHKeyPath
	I0414 14:18:14.140636 1853877 main.go:141] libmachine: (addons-885191) Calling .GetSSHUsername
	I0414 14:18:14.140782 1853877 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/addons-885191/id_rsa Username:docker}
	I0414 14:18:14.339412 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:15.107115 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.114109849s)
	I0414 14:18:15.107168 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.097595425s)
	I0414 14:18:15.107187 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.107112 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.120436891s)
	I0414 14:18:15.107211 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.107201 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.107243 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.107262 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.107292 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.652027695s)
	I0414 14:18:15.107201 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.092054431s)
	I0414 14:18:15.107314 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.107321 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.107331 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.107321 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.107381 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.107385 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.444484738s)
	I0414 14:18:15.107404 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.107265 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.907920418s)
	I0414 14:18:15.107413 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.107424 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.107431 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.107434 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.075546636s)
	I0414 14:18:15.107458 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.107469 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.107538 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.763587852s)
	W0414 14:18:15.107607 1853877 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 14:18:15.107635 1853877 retry.go:31] will retry after 177.273242ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
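The apply failure above is the usual CRD/custom-resource ordering race: the snapshot CRDs and the csi-hostpath VolumeSnapshotClass are sent in a single kubectl apply, and the class cannot be mapped because the just-created CRDs are not yet established, hence the "ensure CRDs are installed first" hint. minikube retries after ~177 ms, this time with kubectl apply --force (visible at 14:18:15.286 below), by which point the CRDs created on the first attempt have had time to settle. The race can also be avoided by applying in two phases and waiting for the CRDs to become established; a sketch along those lines, reusing the manifest paths from the log (not what the addon installer actually runs):

	# 1. install the snapshot CRDs on their own
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# 2. block until the CRDs are established before creating any custom resources
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	# 3. the VolumeSnapshotClass, RBAC and controller manifests can now be applied safely
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml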
	I0414 14:18:15.110336 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.110357 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.110377 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.110382 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.110390 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.110390 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.110408 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.110416 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.110424 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.110455 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.110465 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.110475 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.110483 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.110488 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.110492 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.110494 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.110496 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.110503 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.110509 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.110517 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.110514 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.110531 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.110541 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.110517 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.110559 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.110568 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.110576 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.110576 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.110584 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.110499 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.110584 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.110628 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.110637 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.110643 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.110336 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.110698 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.110709 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.110717 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.110724 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.111300 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.111334 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.111359 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.111366 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.111380 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.111382 1853877 addons.go:479] Verifying addon ingress=true in "addons-885191"
	I0414 14:18:15.111390 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.111427 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.111449 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.111456 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.111602 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.111637 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.111647 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.111662 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.111672 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.111681 1853877 addons.go:479] Verifying addon metrics-server=true in "addons-885191"
	I0414 14:18:15.111721 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.111728 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.111827 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.111900 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.111909 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.111910 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.111923 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.111939 1853877 addons.go:479] Verifying addon registry=true in "addons-885191"
	I0414 14:18:15.114010 1853877 out.go:177] * Verifying ingress addon...
	I0414 14:18:15.115679 1853877 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-885191 service yakd-dashboard -n yakd-dashboard
	
	I0414 14:18:15.115681 1853877 out.go:177] * Verifying registry addon...
	I0414 14:18:15.116479 1853877 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0414 14:18:15.117642 1853877 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0414 14:18:15.166310 1853877 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0414 14:18:15.166340 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:15.174553 1853877 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0414 14:18:15.174584 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:15.218650 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:15.218676 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:15.219048 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:15.219059 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:15.219076 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:15.286098 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 14:18:15.622147 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:15.622520 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:16.122427 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:16.122462 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:16.629138 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:16.629147 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:16.834263 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:17.153191 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:17.153395 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:17.272368 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.401046933s)
	I0414 14:18:17.272445 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:17.272445 1853877 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.136011079s)
	I0414 14:18:17.272462 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:17.272826 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:17.272893 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:17.272934 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:17.272942 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:17.272960 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:17.274105 1853877 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 14:18:17.275144 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:17.275174 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:17.275188 1853877 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-885191"
	I0414 14:18:17.275188 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:17.276449 1853877 out.go:177] * Verifying csi-hostpath-driver addon...
	I0414 14:18:17.276466 1853877 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0414 14:18:17.278107 1853877 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0414 14:18:17.278135 1853877 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0414 14:18:17.278853 1853877 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0414 14:18:17.316542 1853877 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 14:18:17.316579 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:17.440566 1853877 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0414 14:18:17.440604 1853877 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0414 14:18:17.516748 1853877 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 14:18:17.516787 1853877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0414 14:18:17.612387 1853877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 14:18:17.620604 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:17.621648 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:17.727530 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.441374672s)
	I0414 14:18:17.727604 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:17.727618 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:17.727995 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:17.728047 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:17.728063 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:17.728073 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:17.728387 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:17.728411 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:17.728414 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:17.782689 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:18.125461 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:18.125627 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:18.283116 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:18.714627 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:18.714651 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:18.838497 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:18.842633 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:19.162707 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:19.162897 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:19.189098 1853877 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.576653148s)
	I0414 14:18:19.189190 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:19.189208 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:19.189552 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:19.189565 1853877 main.go:141] libmachine: (addons-885191) DBG | Closing plugin on server side
	I0414 14:18:19.189575 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:19.189599 1853877 main.go:141] libmachine: Making call to close driver server
	I0414 14:18:19.189607 1853877 main.go:141] libmachine: (addons-885191) Calling .Close
	I0414 14:18:19.189944 1853877 main.go:141] libmachine: Successfully made call to close driver server
	I0414 14:18:19.189966 1853877 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 14:18:19.190964 1853877 addons.go:479] Verifying addon gcp-auth=true in "addons-885191"
	I0414 14:18:19.193244 1853877 out.go:177] * Verifying gcp-auth addon...
	I0414 14:18:19.195538 1853877 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0414 14:18:19.232700 1853877 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0414 14:18:19.232735 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:19.320442 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:19.621387 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:19.622609 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:19.721370 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:19.823652 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:20.123196 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:20.123905 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:20.221982 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:20.284720 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:20.619722 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:20.621454 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:20.699799 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:20.783041 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:21.121340 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:21.121446 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:21.198875 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:21.282558 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:21.322135 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:21.717687 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:21.717994 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:21.718019 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:21.786383 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:22.120949 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:22.121621 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:22.198504 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:22.283935 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:22.621290 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:22.621812 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:22.698839 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:22.783692 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:23.122055 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:23.122299 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:23.199159 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:23.282846 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:23.322617 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:23.620844 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:23.620997 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:23.701144 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:23.781947 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:24.121166 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:24.121342 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:24.199716 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:24.283310 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:24.621691 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:24.621781 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:24.698811 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:24.783389 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:25.121366 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:25.121794 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:25.199254 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:25.285480 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:25.620995 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:25.621189 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:25.699184 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:25.783298 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:25.824597 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:26.121181 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:26.122658 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:26.199222 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:26.284406 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:26.619894 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:26.620338 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:26.699094 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:26.782772 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:27.121719 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:27.121758 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:27.200157 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:27.284557 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:27.620920 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:27.621092 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:27.698954 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:27.782542 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:28.125443 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:28.125794 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:28.198464 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:28.282495 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:28.321340 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:28.619954 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:28.621689 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:28.698777 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:28.783004 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:29.121266 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:29.121350 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:29.199643 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:29.284410 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:29.619883 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:29.620620 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:29.698648 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:29.782834 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:30.134018 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:30.134111 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:30.199661 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:30.282894 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:30.322567 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:30.621005 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:30.622077 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:30.699384 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:30.782574 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:31.122499 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:31.122666 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:31.199163 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:31.285551 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:31.619750 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:31.621327 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:31.699580 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:31.930065 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:32.121836 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:32.122486 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:32.199928 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:32.282900 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:32.325686 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:32.620695 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:32.620889 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:32.699656 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:32.783866 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:33.120577 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:33.121397 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:33.199329 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:33.285036 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:33.621194 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:33.622050 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:33.699349 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:33.782654 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:34.123178 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:34.123364 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:34.200306 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:34.282044 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:34.621264 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:34.621457 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:34.700540 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:34.782936 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:34.822126 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:35.227360 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:35.227726 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:35.227856 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:35.285198 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:35.753150 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:35.755241 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:35.755823 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:35.783165 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:36.120348 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:36.120526 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:36.198881 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:36.283844 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:36.620916 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:36.621040 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:36.698977 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:36.782211 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:37.121050 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:37.122263 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:37.199436 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:37.283064 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:37.322753 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:37.620779 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:37.620844 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:37.720824 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:37.783087 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:38.121510 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:38.121751 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:38.198700 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:38.284994 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:38.619604 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:38.620077 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:38.699152 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:38.782123 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:39.121074 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:39.121076 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:39.199190 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:39.284856 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:39.327635 1853877 pod_ready.go:103] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:39.619993 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:39.620391 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:39.699460 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:39.783630 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:40.604494 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:40.604708 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:40.604952 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:40.606990 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:40.612899 1853877 pod_ready.go:93] pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace has status "Ready":"True"
	I0414 14:18:40.612925 1853877 pod_ready.go:82] duration metric: took 28.296761631s for pod "amd-gpu-device-plugin-qdw2j" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.612941 1853877 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5bclq" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.623795 1853877 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-5bclq" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-5bclq" not found
	I0414 14:18:40.623830 1853877 pod_ready.go:82] duration metric: took 10.879239ms for pod "coredns-668d6bf9bc-5bclq" in "kube-system" namespace to be "Ready" ...
	E0414 14:18:40.623856 1853877 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-5bclq" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-5bclq" not found
	I0414 14:18:40.623865 1853877 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-64jmc" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.629243 1853877 pod_ready.go:93] pod "coredns-668d6bf9bc-64jmc" in "kube-system" namespace has status "Ready":"True"
	I0414 14:18:40.629266 1853877 pod_ready.go:82] duration metric: took 5.394641ms for pod "coredns-668d6bf9bc-64jmc" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.629279 1853877 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-885191" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.639306 1853877 pod_ready.go:93] pod "etcd-addons-885191" in "kube-system" namespace has status "Ready":"True"
	I0414 14:18:40.639330 1853877 pod_ready.go:82] duration metric: took 10.043888ms for pod "etcd-addons-885191" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.639339 1853877 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-885191" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.645143 1853877 pod_ready.go:93] pod "kube-apiserver-addons-885191" in "kube-system" namespace has status "Ready":"True"
	I0414 14:18:40.645173 1853877 pod_ready.go:82] duration metric: took 5.826587ms for pod "kube-apiserver-addons-885191" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.645188 1853877 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-885191" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.650651 1853877 pod_ready.go:93] pod "kube-controller-manager-addons-885191" in "kube-system" namespace has status "Ready":"True"
	I0414 14:18:40.650675 1853877 pod_ready.go:82] duration metric: took 5.480408ms for pod "kube-controller-manager-addons-885191" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.650688 1853877 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rzkkw" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:40.704292 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:40.704424 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:40.704460 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:40.783355 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:41.008173 1853877 pod_ready.go:93] pod "kube-proxy-rzkkw" in "kube-system" namespace has status "Ready":"True"
	I0414 14:18:41.008198 1853877 pod_ready.go:82] duration metric: took 357.503767ms for pod "kube-proxy-rzkkw" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:41.008208 1853877 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-885191" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:41.121735 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:41.122472 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:41.199066 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:41.282780 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:41.408070 1853877 pod_ready.go:93] pod "kube-scheduler-addons-885191" in "kube-system" namespace has status "Ready":"True"
	I0414 14:18:41.408102 1853877 pod_ready.go:82] duration metric: took 399.884524ms for pod "kube-scheduler-addons-885191" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:41.408115 1853877 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-4wklt" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:41.620438 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:41.620469 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:41.699573 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:41.782949 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:42.121262 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:42.121291 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:42.221450 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:42.283085 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:42.621142 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:42.621235 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:42.699080 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:42.782487 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:43.120459 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:43.121911 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:43.199389 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:43.282876 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:43.413978 1853877 pod_ready.go:103] pod "metrics-server-7fbb699795-4wklt" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:43.620428 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:43.622358 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:43.699726 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:43.782775 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:44.121967 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:44.121984 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:44.198805 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:44.283191 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:44.621123 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:44.621858 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:44.699097 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:44.782265 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:45.121180 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:45.121225 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:45.199239 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:45.282783 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:45.414339 1853877 pod_ready.go:103] pod "metrics-server-7fbb699795-4wklt" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:45.619654 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:45.620866 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:45.698967 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:45.782419 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:46.327907 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:46.328535 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:46.330424 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:46.330575 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:46.624960 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:46.624960 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:46.699502 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:46.782786 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:47.124905 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:47.125881 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:47.206194 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:47.284468 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:47.621137 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:47.621258 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:47.699353 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:47.783005 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:47.914133 1853877 pod_ready.go:103] pod "metrics-server-7fbb699795-4wklt" in "kube-system" namespace has status "Ready":"False"
	I0414 14:18:48.121753 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:48.121781 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:48.198633 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:48.282711 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:48.623406 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:48.623404 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:48.727318 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:48.822461 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:49.127039 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:49.127136 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:49.217971 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:49.284987 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:49.621465 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:49.621616 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:49.701042 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:49.782684 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:50.131393 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:50.131666 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:50.224346 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:50.285035 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:50.414271 1853877 pod_ready.go:93] pod "metrics-server-7fbb699795-4wklt" in "kube-system" namespace has status "Ready":"True"
	I0414 14:18:50.414300 1853877 pod_ready.go:82] duration metric: took 9.006177481s for pod "metrics-server-7fbb699795-4wklt" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:50.414315 1853877 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-cgpdg" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:50.420303 1853877 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-cgpdg" in "kube-system" namespace has status "Ready":"True"
	I0414 14:18:50.420328 1853877 pod_ready.go:82] duration metric: took 6.005338ms for pod "nvidia-device-plugin-daemonset-cgpdg" in "kube-system" namespace to be "Ready" ...
	I0414 14:18:50.420344 1853877 pod_ready.go:39] duration metric: took 38.197662517s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 14:18:50.420379 1853877 api_server.go:52] waiting for apiserver process to appear ...
	I0414 14:18:50.420440 1853877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:18:50.457540 1853877 api_server.go:72] duration metric: took 44.307556979s to wait for apiserver process to appear ...
	I0414 14:18:50.457583 1853877 api_server.go:88] waiting for apiserver healthz status ...
	I0414 14:18:50.457611 1853877 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I0414 14:18:50.462232 1853877 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I0414 14:18:50.463584 1853877 api_server.go:141] control plane version: v1.32.2
	I0414 14:18:50.463612 1853877 api_server.go:131] duration metric: took 6.021215ms to wait for apiserver health ...
	I0414 14:18:50.463621 1853877 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 14:18:50.468959 1853877 system_pods.go:59] 18 kube-system pods found
	I0414 14:18:50.468993 1853877 system_pods.go:61] "amd-gpu-device-plugin-qdw2j" [b211e56e-a21f-44c2-aec9-b3b5ab7e7fc7] Running
	I0414 14:18:50.468998 1853877 system_pods.go:61] "coredns-668d6bf9bc-64jmc" [f279f811-2746-47ea-ba40-130ef9246a7e] Running
	I0414 14:18:50.469007 1853877 system_pods.go:61] "csi-hostpath-attacher-0" [7bfe0e81-480d-4a33-8c1b-1d77d47dfab1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 14:18:50.469012 1853877 system_pods.go:61] "csi-hostpath-resizer-0" [ddfb077d-7366-4bc7-954b-96bf01257c95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 14:18:50.469021 1853877 system_pods.go:61] "csi-hostpathplugin-6c98j" [fd632614-8bd8-4ad4-a983-45e2eda50d32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 14:18:50.469027 1853877 system_pods.go:61] "etcd-addons-885191" [c6dfcded-0aff-4db1-9454-8eccaf9beace] Running
	I0414 14:18:50.469031 1853877 system_pods.go:61] "kube-apiserver-addons-885191" [76809777-4d12-401f-ad2b-e0af96e6d08b] Running
	I0414 14:18:50.469036 1853877 system_pods.go:61] "kube-controller-manager-addons-885191" [6c5f0074-032c-442b-9e9e-ad3892773893] Running
	I0414 14:18:50.469044 1853877 system_pods.go:61] "kube-ingress-dns-minikube" [5c6d3a93-bcca-4c14-acc3-d9cd431d49ef] Running
	I0414 14:18:50.469047 1853877 system_pods.go:61] "kube-proxy-rzkkw" [01f34f39-3532-4e8e-93a0-bb1ec6904704] Running
	I0414 14:18:50.469053 1853877 system_pods.go:61] "kube-scheduler-addons-885191" [7766a2e2-29f3-4dde-b330-caeb9ccc00de] Running
	I0414 14:18:50.469056 1853877 system_pods.go:61] "metrics-server-7fbb699795-4wklt" [e0f2fc09-896e-4cb8-8c80-f2a06e7414ec] Running
	I0414 14:18:50.469062 1853877 system_pods.go:61] "nvidia-device-plugin-daemonset-cgpdg" [4c80fba9-8b9a-4a73-8daf-be4580ea3fde] Running
	I0414 14:18:50.469064 1853877 system_pods.go:61] "registry-6c88467877-glhj8" [c9af8f5b-2acb-40bb-bf80-598ad76b971c] Running
	I0414 14:18:50.469069 1853877 system_pods.go:61] "registry-proxy-8b99t" [493c08aa-c265-4baf-ac9c-36c60e189a83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 14:18:50.469081 1853877 system_pods.go:61] "snapshot-controller-68b874b76f-7spsf" [e2e5ed2a-3606-4a75-a5c1-2105d0d3d2be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 14:18:50.469086 1853877 system_pods.go:61] "snapshot-controller-68b874b76f-d5ltz" [79289428-4f28-435b-bbc2-bf8e07d446cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 14:18:50.469091 1853877 system_pods.go:61] "storage-provisioner" [7362fdb3-8bb6-441f-bb78-74251cb5cba0] Running
	I0414 14:18:50.469098 1853877 system_pods.go:74] duration metric: took 5.470769ms to wait for pod list to return data ...
	I0414 14:18:50.469108 1853877 default_sa.go:34] waiting for default service account to be created ...
	I0414 14:18:50.473195 1853877 default_sa.go:45] found service account: "default"
	I0414 14:18:50.473221 1853877 default_sa.go:55] duration metric: took 4.104766ms for default service account to be created ...
	I0414 14:18:50.473230 1853877 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 14:18:50.479680 1853877 system_pods.go:86] 18 kube-system pods found
	I0414 14:18:50.479722 1853877 system_pods.go:89] "amd-gpu-device-plugin-qdw2j" [b211e56e-a21f-44c2-aec9-b3b5ab7e7fc7] Running
	I0414 14:18:50.479728 1853877 system_pods.go:89] "coredns-668d6bf9bc-64jmc" [f279f811-2746-47ea-ba40-130ef9246a7e] Running
	I0414 14:18:50.479738 1853877 system_pods.go:89] "csi-hostpath-attacher-0" [7bfe0e81-480d-4a33-8c1b-1d77d47dfab1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0414 14:18:50.479748 1853877 system_pods.go:89] "csi-hostpath-resizer-0" [ddfb077d-7366-4bc7-954b-96bf01257c95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0414 14:18:50.479760 1853877 system_pods.go:89] "csi-hostpathplugin-6c98j" [fd632614-8bd8-4ad4-a983-45e2eda50d32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 14:18:50.479766 1853877 system_pods.go:89] "etcd-addons-885191" [c6dfcded-0aff-4db1-9454-8eccaf9beace] Running
	I0414 14:18:50.479773 1853877 system_pods.go:89] "kube-apiserver-addons-885191" [76809777-4d12-401f-ad2b-e0af96e6d08b] Running
	I0414 14:18:50.479778 1853877 system_pods.go:89] "kube-controller-manager-addons-885191" [6c5f0074-032c-442b-9e9e-ad3892773893] Running
	I0414 14:18:50.479787 1853877 system_pods.go:89] "kube-ingress-dns-minikube" [5c6d3a93-bcca-4c14-acc3-d9cd431d49ef] Running
	I0414 14:18:50.479792 1853877 system_pods.go:89] "kube-proxy-rzkkw" [01f34f39-3532-4e8e-93a0-bb1ec6904704] Running
	I0414 14:18:50.479798 1853877 system_pods.go:89] "kube-scheduler-addons-885191" [7766a2e2-29f3-4dde-b330-caeb9ccc00de] Running
	I0414 14:18:50.479803 1853877 system_pods.go:89] "metrics-server-7fbb699795-4wklt" [e0f2fc09-896e-4cb8-8c80-f2a06e7414ec] Running
	I0414 14:18:50.479807 1853877 system_pods.go:89] "nvidia-device-plugin-daemonset-cgpdg" [4c80fba9-8b9a-4a73-8daf-be4580ea3fde] Running
	I0414 14:18:50.479811 1853877 system_pods.go:89] "registry-6c88467877-glhj8" [c9af8f5b-2acb-40bb-bf80-598ad76b971c] Running
	I0414 14:18:50.479815 1853877 system_pods.go:89] "registry-proxy-8b99t" [493c08aa-c265-4baf-ac9c-36c60e189a83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0414 14:18:50.479826 1853877 system_pods.go:89] "snapshot-controller-68b874b76f-7spsf" [e2e5ed2a-3606-4a75-a5c1-2105d0d3d2be] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 14:18:50.479835 1853877 system_pods.go:89] "snapshot-controller-68b874b76f-d5ltz" [79289428-4f28-435b-bbc2-bf8e07d446cb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0414 14:18:50.479841 1853877 system_pods.go:89] "storage-provisioner" [7362fdb3-8bb6-441f-bb78-74251cb5cba0] Running
	I0414 14:18:50.479851 1853877 system_pods.go:126] duration metric: took 6.61457ms to wait for k8s-apps to be running ...
	I0414 14:18:50.479861 1853877 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 14:18:50.479922 1853877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:18:50.538465 1853877 system_svc.go:56] duration metric: took 58.591008ms WaitForService to wait for kubelet
	I0414 14:18:50.538504 1853877 kubeadm.go:582] duration metric: took 44.388532437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 14:18:50.538526 1853877 node_conditions.go:102] verifying NodePressure condition ...
	I0414 14:18:50.541801 1853877 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 14:18:50.541828 1853877 node_conditions.go:123] node cpu capacity is 2
	I0414 14:18:50.541842 1853877 node_conditions.go:105] duration metric: took 3.311566ms to run NodePressure ...
	I0414 14:18:50.541857 1853877 start.go:241] waiting for startup goroutines ...
	I0414 14:18:50.621623 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:50.622247 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:50.700436 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:50.783374 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:51.154243 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:51.154575 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:51.199888 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:51.282787 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:51.619381 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:51.621203 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:51.699248 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:51.782410 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:52.119902 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:52.121343 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:52.200044 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:52.301345 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:52.623533 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:52.623698 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:52.698920 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:52.781718 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:53.120563 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 14:18:53.120850 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:53.200030 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:53.281889 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:53.620322 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:53.621326 1853877 kapi.go:107] duration metric: took 38.503682737s to wait for kubernetes.io/minikube-addons=registry ...
	I0414 14:18:53.699601 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:53.782816 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:54.122481 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:54.393850 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:54.393970 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:54.620831 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:54.699545 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:54.783482 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:55.119908 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:55.199285 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:55.282807 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:55.619818 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:55.698484 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:55.782883 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:56.121997 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:56.222282 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:56.285689 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:56.620900 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:56.698162 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:56.782260 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:57.120346 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:57.199046 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:57.282045 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:57.620194 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:57.698956 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:57.782283 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:58.120684 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:58.198562 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:58.282766 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:58.621762 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:58.698511 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:58.785402 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:18:59.120588 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:18:59.199322 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:18:59.283082 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:00.013120 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:00.013712 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:00.015272 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:00.121191 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:00.199544 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:00.291465 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:00.619882 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:00.698579 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:00.782985 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:01.120405 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:01.199286 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:01.283317 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:01.621047 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:01.699194 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:01.782830 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:02.120336 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:02.199100 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:02.282610 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:02.620495 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:02.699695 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:02.783756 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:03.119661 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:03.199576 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:03.283440 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:03.624635 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:03.724259 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:03.782469 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:04.121058 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:04.199334 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:04.282494 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:04.619931 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:04.698299 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:04.782756 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:05.119829 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:05.198346 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:05.282728 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:05.620530 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:05.721335 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:05.782187 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:06.119684 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:06.199999 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:06.282770 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:06.621212 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:06.699120 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:06.782876 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:07.120748 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:07.199929 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:07.284263 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:07.621450 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:07.701311 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:07.785492 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:08.120200 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:08.199772 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:08.283352 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:08.620217 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:08.699232 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:08.785973 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:09.121274 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:09.221077 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:09.282761 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:09.620867 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:09.698408 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:09.782308 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:10.121135 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:10.199027 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:10.282696 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:10.620226 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:10.720659 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:10.821756 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:11.120645 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:11.199662 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:11.282420 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:11.620310 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:11.699163 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:11.782986 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:12.121202 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:12.199281 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:12.285752 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:12.620316 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:12.699446 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:12.783732 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:13.120207 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:13.199584 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:13.283615 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:13.815286 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:13.815371 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:13.815672 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:14.124113 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:14.203034 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:14.285245 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:14.620801 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:14.698526 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:14.784324 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:15.125785 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:15.224568 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:15.325766 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:15.619558 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:15.699195 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:15.783659 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:16.120008 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:16.198852 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:16.283715 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:16.621557 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:16.722424 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:16.785677 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:17.120421 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:17.199609 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:17.283028 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:17.620572 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:17.699271 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:17.782683 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:18.120299 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:18.205042 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:18.304697 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:18.620307 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:18.699480 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:18.782941 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 14:19:19.120753 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:19.198410 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:19.282689 1853877 kapi.go:107] duration metric: took 1m2.003828616s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0414 14:19:19.621207 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:19.699556 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:20.121063 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:20.198751 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:20.619893 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:20.698533 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:21.120377 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:21.200523 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:21.619714 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:21.698856 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:22.121691 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:22.198876 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:22.621520 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:22.699580 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:23.120203 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:23.198841 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:23.619947 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:23.698414 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:24.120644 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:24.199282 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:24.746184 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:24.746390 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:25.120609 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:25.199694 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:25.620659 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:25.699704 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:26.120984 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:26.199125 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:26.620790 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:26.698516 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:27.120291 1853877 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 14:19:27.199237 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:27.622158 1853877 kapi.go:107] duration metric: took 1m12.505671713s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0414 14:19:27.699025 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:28.199795 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:28.699846 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:29.202239 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:29.699086 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:30.200127 1853877 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 14:19:30.700777 1853877 kapi.go:107] duration metric: took 1m11.505238183s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0414 14:19:30.703021 1853877 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-885191 cluster.
	I0414 14:19:30.704469 1853877 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0414 14:19:30.705752 1853877 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0414 14:19:30.707374 1853877 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, cloud-spanner, storage-provisioner-rancher, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0414 14:19:30.708691 1853877 addons.go:514] duration metric: took 1m24.558676608s for enable addons: enabled=[storage-provisioner ingress-dns cloud-spanner storage-provisioner-rancher amd-gpu-device-plugin nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0414 14:19:30.708747 1853877 start.go:246] waiting for cluster config update ...
	I0414 14:19:30.708777 1853877 start.go:255] writing updated cluster config ...
	I0414 14:19:30.709083 1853877 ssh_runner.go:195] Run: rm -f paused
	I0414 14:19:30.765706 1853877 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 14:19:30.767795 1853877 out.go:177] * Done! kubectl is now configured to use "addons-885191" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.910211524Z" level=info msg="Started container" PID=12178 containerID=b5ce83cdd4c652742b5593d5c7beab96cd1a29846f593e940d46f7a0cae25919 description=default/hello-world-app-7d9564db4-bp4l4/hello-world-app file="server/container_start.go:115" id=36483b59-62f4-4243-b4fc-8fe113034c2c name=/runtime.v1.RuntimeService/StartContainer sandboxID=58c922a8849c7b09e9a7ae1eb007fe3acb3b9b116af43926de5f7bdaf73f1758
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.924260813Z" level=debug msg="Response: &StartContainerResponse{}" file="otel-collector/interceptors.go:74" id=36483b59-62f4-4243-b4fc-8fe113034c2c name=/runtime.v1.RuntimeService/StartContainer
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.954068944Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4008e5c9-0449-4aa1-946a-485da8ab9c3b name=/runtime.v1.RuntimeService/Version
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.954164453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4008e5c9-0449-4aa1-946a-485da8ab9c3b name=/runtime.v1.RuntimeService/Version
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.955725874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d07191b-0a6a-4e86-9459-0038f845950a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.957529929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640557957497481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604413,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d07191b-0a6a-4e86-9459-0038f845950a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.958179647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c2078a4-5817-4f81-92b9-063a4af3a1ce name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.958281715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c2078a4-5817-4f81-92b9-063a4af3a1ce name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.958855355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5ce83cdd4c652742b5593d5c7beab96cd1a29846f593e940d46f7a0cae25919,PodSandboxId:58c922a8849c7b09e9a7ae1eb007fe3acb3b9b116af43926de5f7bdaf73f1758,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1744640557813877012,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-bp4l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac4bda66-8184-4f99-b2cb-a9af21219fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6716c7646ee72c4a6491d350fdc7aa47712462aa913b0cc3bf69f2e72ef482db,PodSandboxId:e9fba8d6c4e0ea827d82a8d3f4acbc2048e6a54e9b71152599fbe1f834d5a68f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744640418407693904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 48d650a7-bbfb-493e-87b8-da6b06272724,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb355482d7e2d399add86353e7c885cc77463315b1c98e28abfcb1c9fe677595,PodSandboxId:dc3dfa266574b673619a6e0d504ce8b4156802a9f2dcbafbad7b304ec7eaf430,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744640374264840027,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef6f8d70-e12e-4fee-9a
e6-742fa4df29ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f47d2934d07d5908d2aaa8d465d889147fb1b148cbd68656c06ef4131addc0d2,PodSandboxId:7c5461165fbf6d8b67143da4d643cee4c3605a179395184cf76fcd8d2626d1bf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744640366305433775,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-lr2hs,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: df4adb6d-7f9e-4948-819e-9f42b6218728,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2f43a5be0747cdd754ccc586dd8d375a1d373d77e261f10e4fc55f1b61b763d4,PodSandboxId:93b8d97a363bfbb79f58bed525ba6a6cbef9320044f574aaacf5e2745847d478,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744640345669560666,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h9s5b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 282bb2c3-c085-428b-a02e-2aaee97ed20d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf12f88315039187d9482421f554ffc8eb5981572da4ebc01e467795eebe25a9,PodSandboxId:e0804742e1ee9cae980137b857fc251fc4fd37254fa432072f812d77765a3c32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744640345530441253,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-58wtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8292b8a3-dcf6-4065-b7f5-041750832ba4,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a5fca7803dd62fa2f7f2df18a0ee9b8f2218a866155b46279836c3652475108,PodSandboxId:ba714e7093dc7289e90442af754520b667b7b52c5781822efa334b3fe9aa1a58,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s
-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744640318950470336,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-qdw2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b211e56e-a21f-44c2-aec9-b3b5ab7e7fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd182887a5f5ff7524415abf29906d6b95d25742282bf20ba9d00da0c74e8bc3,PodSandboxId:66d6aaa0024388868fae7339eb5c9e8cfe652c15c8a91a76bef579eabfe44daa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Im
age:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744640304306369620,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c6d3a93-bcca-4c14-acc3-d9cd431d49ef,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f27cae2375fcf217e6bbbe07b5974fedad2c8af0f6a4c2dbf77c675ec05b5180,PodSandbo
xId:93e3b8d30ac23946a54f287a21def7c768467895412f4e0f1ea8ac69b36992cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744640293763434593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7362fdb3-8bb6-441f-bb78-74251cb5cba0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3443e5244f4975c4965b900a4c5ec1b42bcc3bbed3a8468957f964fc6ef4562c,PodSandboxId:8c61c40b
f123ca0376c24de3095c7c8d26addee0b40630e278d0cee99db09f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744640290110297020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-64jmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f279f811-2746-47ea-ba40-130ef9246a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4b293b686762230de7fcc66d7f5b833274f44a9772600237e06655a7280dfe,PodSandboxId:f881b1858d7483fa17df3bf46f69ad5e0982e4ebdc761353d6d59b7eb48afc8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744640286344644550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f34f39-3532-4e8e-93a0-bb1ec6904704,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144f438b98994e85d0b73a977a1016923f80136a7232241de4de925613203e8,PodSandboxId:aaad6badd19abedd97cf343ebb85e92ef455760e65434f2a0b5e9e4ab966a83a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744640275960701742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d088b04de0ba5eb9c4ecf95db240c8e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c168a238a3cb15e1e4c35cd94d933d52adc9229f39a63353900dbe8d182f2d0,PodSandboxId:cff8dac8ea4ed29696839507236428f76359dc6eb6689ba4b598c68473f9f107,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744640275949099126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968a42effdc5dec0bb05c7489c66b83b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:81d34a3b0b695dc6948dea9e540b3f8b3eff99d067c39438e9da10cbbf59ceae,PodSandboxId:5cbf1a75b1006ffcf0b6f910111bb67451696f054756fc3d28d95ca24288dbe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744640275837450006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5839073fbf7f89bcaa0c209ddea79eaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408d90
3b3bc46856b138efd4ebf27a1ec3caae2acb63258fd29def7525d6f2b6,PodSandboxId:38c59c2b94636d2ad9739963dec61d46db2ca938f3d56baf31b0e47c0cdc38d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744640275871461706,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e17cd1bed04896d699e14530b8c8a79b,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file=
"otel-collector/interceptors.go:74" id=4c2078a4-5817-4f81-92b9-063a4af3a1ce name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.999667555Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1bec2c71-1b13-42d5-9377-babb99e90d31 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:22:37 addons-885191 crio[667]: time="2025-04-14 14:22:37.999741895Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1bec2c71-1b13-42d5-9377-babb99e90d31 name=/runtime.v1.RuntimeService/Version
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.001033269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=adb1b79d-2bd9-40bd-abdb-814c9dc36b6d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.002208051Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640558002174929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604413,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adb1b79d-2bd9-40bd-abdb-814c9dc36b6d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.003474513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6777ea7f-a0a8-48c6-8f10-9afe5f0e15d8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.003578075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6777ea7f-a0a8-48c6-8f10-9afe5f0e15d8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.004007101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5ce83cdd4c652742b5593d5c7beab96cd1a29846f593e940d46f7a0cae25919,PodSandboxId:58c922a8849c7b09e9a7ae1eb007fe3acb3b9b116af43926de5f7bdaf73f1758,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1744640557813877012,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-bp4l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac4bda66-8184-4f99-b2cb-a9af21219fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6716c7646ee72c4a6491d350fdc7aa47712462aa913b0cc3bf69f2e72ef482db,PodSandboxId:e9fba8d6c4e0ea827d82a8d3f4acbc2048e6a54e9b71152599fbe1f834d5a68f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744640418407693904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 48d650a7-bbfb-493e-87b8-da6b06272724,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb355482d7e2d399add86353e7c885cc77463315b1c98e28abfcb1c9fe677595,PodSandboxId:dc3dfa266574b673619a6e0d504ce8b4156802a9f2dcbafbad7b304ec7eaf430,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744640374264840027,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef6f8d70-e12e-4fee-9a
e6-742fa4df29ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f47d2934d07d5908d2aaa8d465d889147fb1b148cbd68656c06ef4131addc0d2,PodSandboxId:7c5461165fbf6d8b67143da4d643cee4c3605a179395184cf76fcd8d2626d1bf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744640366305433775,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-lr2hs,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: df4adb6d-7f9e-4948-819e-9f42b6218728,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2f43a5be0747cdd754ccc586dd8d375a1d373d77e261f10e4fc55f1b61b763d4,PodSandboxId:93b8d97a363bfbb79f58bed525ba6a6cbef9320044f574aaacf5e2745847d478,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744640345669560666,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h9s5b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 282bb2c3-c085-428b-a02e-2aaee97ed20d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf12f88315039187d9482421f554ffc8eb5981572da4ebc01e467795eebe25a9,PodSandboxId:e0804742e1ee9cae980137b857fc251fc4fd37254fa432072f812d77765a3c32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744640345530441253,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-58wtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8292b8a3-dcf6-4065-b7f5-041750832ba4,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a5fca7803dd62fa2f7f2df18a0ee9b8f2218a866155b46279836c3652475108,PodSandboxId:ba714e7093dc7289e90442af754520b667b7b52c5781822efa334b3fe9aa1a58,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s
-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744640318950470336,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-qdw2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b211e56e-a21f-44c2-aec9-b3b5ab7e7fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd182887a5f5ff7524415abf29906d6b95d25742282bf20ba9d00da0c74e8bc3,PodSandboxId:66d6aaa0024388868fae7339eb5c9e8cfe652c15c8a91a76bef579eabfe44daa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Im
age:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744640304306369620,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c6d3a93-bcca-4c14-acc3-d9cd431d49ef,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f27cae2375fcf217e6bbbe07b5974fedad2c8af0f6a4c2dbf77c675ec05b5180,PodSandbo
xId:93e3b8d30ac23946a54f287a21def7c768467895412f4e0f1ea8ac69b36992cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744640293763434593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7362fdb3-8bb6-441f-bb78-74251cb5cba0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3443e5244f4975c4965b900a4c5ec1b42bcc3bbed3a8468957f964fc6ef4562c,PodSandboxId:8c61c40b
f123ca0376c24de3095c7c8d26addee0b40630e278d0cee99db09f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744640290110297020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-64jmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f279f811-2746-47ea-ba40-130ef9246a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4b293b686762230de7fcc66d7f5b833274f44a9772600237e06655a7280dfe,PodSandboxId:f881b1858d7483fa17df3bf46f69ad5e0982e4ebdc761353d6d59b7eb48afc8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744640286344644550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f34f39-3532-4e8e-93a0-bb1ec6904704,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144f438b98994e85d0b73a977a1016923f80136a7232241de4de925613203e8,PodSandboxId:aaad6badd19abedd97cf343ebb85e92ef455760e65434f2a0b5e9e4ab966a83a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744640275960701742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d088b04de0ba5eb9c4ecf95db240c8e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c168a238a3cb15e1e4c35cd94d933d52adc9229f39a63353900dbe8d182f2d0,PodSandboxId:cff8dac8ea4ed29696839507236428f76359dc6eb6689ba4b598c68473f9f107,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744640275949099126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968a42effdc5dec0bb05c7489c66b83b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:81d34a3b0b695dc6948dea9e540b3f8b3eff99d067c39438e9da10cbbf59ceae,PodSandboxId:5cbf1a75b1006ffcf0b6f910111bb67451696f054756fc3d28d95ca24288dbe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744640275837450006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5839073fbf7f89bcaa0c209ddea79eaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408d90
3b3bc46856b138efd4ebf27a1ec3caae2acb63258fd29def7525d6f2b6,PodSandboxId:38c59c2b94636d2ad9739963dec61d46db2ca938f3d56baf31b0e47c0cdc38d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744640275871461706,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e17cd1bed04896d699e14530b8c8a79b,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file=
"otel-collector/interceptors.go:74" id=6777ea7f-a0a8-48c6-8f10-9afe5f0e15d8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.039151475Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=fa492bc5-2e8c-46a0-a014-228415e52ccd name=/runtime.v1.RuntimeService/Version
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.039225302Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa492bc5-2e8c-46a0-a014-228415e52ccd name=/runtime.v1.RuntimeService/Version
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.043585481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=209f1fb1-d201-446f-9de6-c92fd2218f4d name=/runtime.v1.RuntimeService/Version
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.043655979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=209f1fb1-d201-446f-9de6-c92fd2218f4d name=/runtime.v1.RuntimeService/Version
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.045166259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10b38b8a-07d2-4c7b-a650-ac1f603526b6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.046389475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640558046360703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604413,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10b38b8a-07d2-4c7b-a650-ac1f603526b6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.047580262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd5f9443-3f86-4bbb-bac0-ed057fb2b798 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.047638615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd5f9443-3f86-4bbb-bac0-ed057fb2b798 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 14:22:38 addons-885191 crio[667]: time="2025-04-14 14:22:38.048034715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5ce83cdd4c652742b5593d5c7beab96cd1a29846f593e940d46f7a0cae25919,PodSandboxId:58c922a8849c7b09e9a7ae1eb007fe3acb3b9b116af43926de5f7bdaf73f1758,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1744640557813877012,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-bp4l4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ac4bda66-8184-4f99-b2cb-a9af21219fcc,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6716c7646ee72c4a6491d350fdc7aa47712462aa913b0cc3bf69f2e72ef482db,PodSandboxId:e9fba8d6c4e0ea827d82a8d3f4acbc2048e6a54e9b71152599fbe1f834d5a68f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744640418407693904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 48d650a7-bbfb-493e-87b8-da6b06272724,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb355482d7e2d399add86353e7c885cc77463315b1c98e28abfcb1c9fe677595,PodSandboxId:dc3dfa266574b673619a6e0d504ce8b4156802a9f2dcbafbad7b304ec7eaf430,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744640374264840027,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef6f8d70-e12e-4fee-9a
e6-742fa4df29ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f47d2934d07d5908d2aaa8d465d889147fb1b148cbd68656c06ef4131addc0d2,PodSandboxId:7c5461165fbf6d8b67143da4d643cee4c3605a179395184cf76fcd8d2626d1bf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744640366305433775,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-lr2hs,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: df4adb6d-7f9e-4948-819e-9f42b6218728,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:2f43a5be0747cdd754ccc586dd8d375a1d373d77e261f10e4fc55f1b61b763d4,PodSandboxId:93b8d97a363bfbb79f58bed525ba6a6cbef9320044f574aaacf5e2745847d478,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744640345669560666,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h9s5b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 282bb2c3-c085-428b-a02e-2aaee97ed20d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf12f88315039187d9482421f554ffc8eb5981572da4ebc01e467795eebe25a9,PodSandboxId:e0804742e1ee9cae980137b857fc251fc4fd37254fa432072f812d77765a3c32,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744640345530441253,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-58wtg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8292b8a3-dcf6-4065-b7f5-041750832ba4,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a5fca7803dd62fa2f7f2df18a0ee9b8f2218a866155b46279836c3652475108,PodSandboxId:ba714e7093dc7289e90442af754520b667b7b52c5781822efa334b3fe9aa1a58,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s
-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744640318950470336,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-qdw2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b211e56e-a21f-44c2-aec9-b3b5ab7e7fc7,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd182887a5f5ff7524415abf29906d6b95d25742282bf20ba9d00da0c74e8bc3,PodSandboxId:66d6aaa0024388868fae7339eb5c9e8cfe652c15c8a91a76bef579eabfe44daa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Im
age:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744640304306369620,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c6d3a93-bcca-4c14-acc3-d9cd431d49ef,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f27cae2375fcf217e6bbbe07b5974fedad2c8af0f6a4c2dbf77c675ec05b5180,PodSandbo
xId:93e3b8d30ac23946a54f287a21def7c768467895412f4e0f1ea8ac69b36992cf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744640293763434593,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7362fdb3-8bb6-441f-bb78-74251cb5cba0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3443e5244f4975c4965b900a4c5ec1b42bcc3bbed3a8468957f964fc6ef4562c,PodSandboxId:8c61c40b
f123ca0376c24de3095c7c8d26addee0b40630e278d0cee99db09f17,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744640290110297020,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-64jmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f279f811-2746-47ea-ba40-130ef9246a7e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4b293b686762230de7fcc66d7f5b833274f44a9772600237e06655a7280dfe,PodSandboxId:f881b1858d7483fa17df3bf46f69ad5e0982e4ebdc761353d6d59b7eb48afc8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744640286344644550,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkkw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f34f39-3532-4e8e-93a0-bb1ec6904704,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1144f438b98994e85d0b73a977a1016923f80136a7232241de4de925613203e8,PodSandboxId:aaad6badd19abedd97cf343ebb85e92ef455760e65434f2a0b5e9e4ab966a83a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744640275960701742,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d088b04de0ba5eb9c4ecf95db240c8e,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c168a238a3cb15e1e4c35cd94d933d52adc9229f39a63353900dbe8d182f2d0,PodSandboxId:cff8dac8ea4ed29696839507236428f76359dc6eb6689ba4b598c68473f9f107,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744640275949099126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 968a42effdc5dec0bb05c7489c66b83b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:81d34a3b0b695dc6948dea9e540b3f8b3eff99d067c39438e9da10cbbf59ceae,PodSandboxId:5cbf1a75b1006ffcf0b6f910111bb67451696f054756fc3d28d95ca24288dbe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744640275837450006,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5839073fbf7f89bcaa0c209ddea79eaf,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408d90
3b3bc46856b138efd4ebf27a1ec3caae2acb63258fd29def7525d6f2b6,PodSandboxId:38c59c2b94636d2ad9739963dec61d46db2ca938f3d56baf31b0e47c0cdc38d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744640275871461706,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-885191,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e17cd1bed04896d699e14530b8c8a79b,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file=
"otel-collector/interceptors.go:74" id=fd5f9443-3f86-4bbb-bac0-ed057fb2b798 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	b5ce83cdd4c65       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   58c922a8849c7       hello-world-app-7d9564db4-bp4l4
	6716c7646ee72       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago            Running             nginx                     0                   e9fba8d6c4e0e       nginx
	eb355482d7e2d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   dc3dfa266574b       busybox
	f47d2934d07d5       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago            Running             controller                0                   7c5461165fbf6       ingress-nginx-controller-56d7c84fd4-lr2hs
	2f43a5be0747c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              patch                     0                   93b8d97a363bf       ingress-nginx-admission-patch-h9s5b
	bf12f88315039       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              create                    0                   e0804742e1ee9       ingress-nginx-admission-create-58wtg
	4a5fca7803dd6       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     3 minutes ago            Running             amd-gpu-device-plugin     0                   ba714e7093dc7       amd-gpu-device-plugin-qdw2j
	cd182887a5f5f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago            Running             minikube-ingress-dns      0                   66d6aaa002438       kube-ingress-dns-minikube
	f27cae2375fcf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   93e3b8d30ac23       storage-provisioner
	3443e5244f497       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago            Running             coredns                   0                   8c61c40bf123c       coredns-668d6bf9bc-64jmc
	3b4b293b68676       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago            Running             kube-proxy                0                   f881b1858d748       kube-proxy-rzkkw
	1144f438b9899       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             4 minutes ago            Running             kube-scheduler            0                   aaad6badd19ab       kube-scheduler-addons-885191
	5c168a238a3cb       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago            Running             etcd                      0                   cff8dac8ea4ed       etcd-addons-885191
	408d903b3bc46       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             4 minutes ago            Running             kube-controller-manager   0                   38c59c2b94636       kube-controller-manager-addons-885191
	81d34a3b0b695       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             4 minutes ago            Running             kube-apiserver            0                   5cbf1a75b1006       kube-apiserver-addons-885191
	
	
	==> coredns [3443e5244f4975c4965b900a4c5ec1b42bcc3bbed3a8468957f964fc6ef4562c] <==
	[INFO] 10.244.0.8:60916 - 2085 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000206349s
	[INFO] 10.244.0.8:60916 - 53978 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000106213s
	[INFO] 10.244.0.8:60916 - 34103 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000079587s
	[INFO] 10.244.0.8:60916 - 30570 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000071232s
	[INFO] 10.244.0.8:60916 - 45614 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000098147s
	[INFO] 10.244.0.8:60916 - 35954 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000114501s
	[INFO] 10.244.0.8:60916 - 50376 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000093544s
	[INFO] 10.244.0.8:60177 - 16595 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000155121s
	[INFO] 10.244.0.8:60177 - 16309 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074618s
	[INFO] 10.244.0.8:36489 - 7755 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107522s
	[INFO] 10.244.0.8:36489 - 7968 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000279811s
	[INFO] 10.244.0.8:37301 - 52516 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000097191s
	[INFO] 10.244.0.8:37301 - 52281 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111745s
	[INFO] 10.244.0.8:46253 - 42173 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011632s
	[INFO] 10.244.0.8:46253 - 41756 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148383s
	[INFO] 10.244.0.23:41551 - 40457 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000498145s
	[INFO] 10.244.0.23:46936 - 56027 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000222993s
	[INFO] 10.244.0.23:59686 - 35851 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122063s
	[INFO] 10.244.0.23:37506 - 37229 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000182257s
	[INFO] 10.244.0.23:42613 - 8818 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075898s
	[INFO] 10.244.0.23:55034 - 47794 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082986s
	[INFO] 10.244.0.23:49276 - 55005 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00475466s
	[INFO] 10.244.0.23:37720 - 61023 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.005258313s
	[INFO] 10.244.0.26:54707 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000274388s
	[INFO] 10.244.0.26:46480 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100324s
	
	
	==> describe nodes <==
	Name:               addons-885191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-885191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=addons-885191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T14_18_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-885191
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 14:17:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-885191
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 14:22:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 14:20:34 +0000   Mon, 14 Apr 2025 14:17:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 14:20:34 +0000   Mon, 14 Apr 2025 14:17:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 14:20:34 +0000   Mon, 14 Apr 2025 14:17:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 14:20:34 +0000   Mon, 14 Apr 2025 14:18:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    addons-885191
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 afe022985eed4fd0a694e6ed5fa3b6c6
	  System UUID:                afe02298-5eed-4fd0-a694-e6ed5fa3b6c6
	  Boot ID:                    51f24b7e-21ee-4a36-bf89-80fba700eab6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     hello-world-app-7d9564db4-bp4l4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-lr2hs    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m24s
	  kube-system                 amd-gpu-device-plugin-qdw2j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-668d6bf9bc-64jmc                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m33s
	  kube-system                 etcd-addons-885191                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m38s
	  kube-system                 kube-apiserver-addons-885191                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-controller-manager-addons-885191        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-proxy-rzkkw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-scheduler-addons-885191                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m43s)  kubelet          Node addons-885191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m43s)  kubelet          Node addons-885191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m43s)  kubelet          Node addons-885191 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m37s                  kubelet          Node addons-885191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s                  kubelet          Node addons-885191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s                  kubelet          Node addons-885191 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m36s                  kubelet          Node addons-885191 status is now: NodeReady
	  Normal  RegisteredNode           4m34s                  node-controller  Node addons-885191 event: Registered Node addons-885191 in Controller
	  Normal  CIDRAssignmentFailed     4m34s                  cidrAllocator    Node addons-885191 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[Apr14 14:18] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[  +0.076550] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.169981] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.217689] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[  +4.839281] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.040706] kauditd_printk_skb: 119 callbacks suppressed
	[  +8.146342] kauditd_printk_skb: 110 callbacks suppressed
	[ +22.179629] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.664026] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.144957] kauditd_printk_skb: 27 callbacks suppressed
	[Apr14 14:19] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.108915] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.931756] kauditd_printk_skb: 41 callbacks suppressed
	[  +8.182398] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.540570] kauditd_printk_skb: 16 callbacks suppressed
	[  +9.007713] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.207181] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.128150] kauditd_printk_skb: 6 callbacks suppressed
	[Apr14 14:20] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.036473] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.085530] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.157620] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.083421] kauditd_printk_skb: 47 callbacks suppressed
	[ +23.979757] kauditd_printk_skb: 7 callbacks suppressed
	[Apr14 14:22] kauditd_printk_skb: 57 callbacks suppressed
	
	
	==> etcd [5c168a238a3cb15e1e4c35cd94d933d52adc9229f39a63353900dbe8d182f2d0] <==
	{"level":"info","ts":"2025-04-14T14:19:54.571513Z","caller":"traceutil/trace.go:171","msg":"trace[403640502] linearizableReadLoop","detail":"{readStateIndex:1357; appliedIndex:1356; }","duration":"235.686358ms","start":"2025-04-14T14:19:54.335810Z","end":"2025-04-14T14:19:54.571497Z","steps":["trace[403640502] 'read index received'  (duration: 235.53283ms)","trace[403640502] 'applied index is now lower than readState.Index'  (duration: 152.968µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T14:19:54.571829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.976097ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T14:19:54.571894Z","caller":"traceutil/trace.go:171","msg":"trace[999041998] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1315; }","duration":"236.081966ms","start":"2025-04-14T14:19:54.335804Z","end":"2025-04-14T14:19:54.571886Z","steps":["trace[999041998] 'agreement among raft nodes before linearized reading'  (duration: 235.966223ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:19:54.571994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.855477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T14:19:54.572034Z","caller":"traceutil/trace.go:171","msg":"trace[689619764] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1315; }","duration":"180.905798ms","start":"2025-04-14T14:19:54.391120Z","end":"2025-04-14T14:19:54.572025Z","steps":["trace[689619764] 'agreement among raft nodes before linearized reading'  (duration: 180.745163ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:19:54.573376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.724811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T14:19:54.573621Z","caller":"traceutil/trace.go:171","msg":"trace[682614472] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1315; }","duration":"183.060106ms","start":"2025-04-14T14:19:54.390547Z","end":"2025-04-14T14:19:54.573607Z","steps":["trace[682614472] 'agreement among raft nodes before linearized reading'  (duration: 182.726381ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:19:54.574115Z","caller":"traceutil/trace.go:171","msg":"trace[1490382232] transaction","detail":"{read_only:false; response_revision:1315; number_of_response:1; }","duration":"381.091111ms","start":"2025-04-14T14:19:54.193013Z","end":"2025-04-14T14:19:54.574104Z","steps":["trace[1490382232] 'process raft request'  (duration: 378.369171ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:19:54.571831Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.473548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-04-14T14:19:54.575545Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T14:19:54.192999Z","time spent":"382.475791ms","remote":"127.0.0.1:59924","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":817,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/amd-gpu-device-plugin-qdw2j.183635197025994c\" mod_revision:925 > success:<request_put:<key:\"/registry/events/kube-system/amd-gpu-device-plugin-qdw2j.183635197025994c\" value_size:726 lease:6421735271610930228 >> failure:<request_range:<key:\"/registry/events/kube-system/amd-gpu-device-plugin-qdw2j.183635197025994c\" > >"}
	{"level":"info","ts":"2025-04-14T14:19:54.575718Z","caller":"traceutil/trace.go:171","msg":"trace[847509118] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1315; }","duration":"192.143022ms","start":"2025-04-14T14:19:54.383284Z","end":"2025-04-14T14:19:54.575427Z","steps":["trace[847509118] 'agreement among raft nodes before linearized reading'  (duration: 188.427743ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:20:05.507417Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.781289ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T14:20:05.507475Z","caller":"traceutil/trace.go:171","msg":"trace[1189220952] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1410; }","duration":"172.851727ms","start":"2025-04-14T14:20:05.334611Z","end":"2025-04-14T14:20:05.507463Z","steps":["trace[1189220952] 'range keys from in-memory index tree'  (duration: 172.768139ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:20:05.507548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"288.550111ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T14:20:05.507560Z","caller":"traceutil/trace.go:171","msg":"trace[314340178] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1410; }","duration":"288.586388ms","start":"2025-04-14T14:20:05.218969Z","end":"2025-04-14T14:20:05.507555Z","steps":["trace[314340178] 'count revisions from in-memory index tree'  (duration: 288.413601ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:20:05.507809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.175141ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-04-14T14:20:05.507862Z","caller":"traceutil/trace.go:171","msg":"trace[1632257959] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1410; }","duration":"239.246908ms","start":"2025-04-14T14:20:05.268606Z","end":"2025-04-14T14:20:05.507853Z","steps":["trace[1632257959] 'range keys from in-memory index tree'  (duration: 239.051714ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T14:20:07.744351Z","caller":"traceutil/trace.go:171","msg":"trace[1608860957] linearizableReadLoop","detail":"{readStateIndex:1465; appliedIndex:1464; }","duration":"136.745549ms","start":"2025-04-14T14:20:07.607589Z","end":"2025-04-14T14:20:07.744335Z","steps":["trace[1608860957] 'read index received'  (duration: 136.588285ms)","trace[1608860957] 'applied index is now lower than readState.Index'  (duration: 156.835µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-14T14:20:07.744418Z","caller":"traceutil/trace.go:171","msg":"trace[778102496] transaction","detail":"{read_only:false; response_revision:1417; number_of_response:1; }","duration":"219.41876ms","start":"2025-04-14T14:20:07.524994Z","end":"2025-04-14T14:20:07.744413Z","steps":["trace[778102496] 'process raft request'  (duration: 219.217575ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:20:07.744535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.931746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-14T14:20:07.744550Z","caller":"traceutil/trace.go:171","msg":"trace[1790440264] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:1417; }","duration":"136.981804ms","start":"2025-04-14T14:20:07.607564Z","end":"2025-04-14T14:20:07.744546Z","steps":["trace[1790440264] 'agreement among raft nodes before linearized reading'  (duration: 136.92861ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:20:07.744699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.730818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-04-14T14:20:07.744731Z","caller":"traceutil/trace.go:171","msg":"trace[647986219] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1417; }","duration":"136.830649ms","start":"2025-04-14T14:20:07.607893Z","end":"2025-04-14T14:20:07.744724Z","steps":["trace[647986219] 'agreement among raft nodes before linearized reading'  (duration: 136.739734ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T14:20:09.021473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.361114ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/headlamp/headlamp\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T14:20:09.021532Z","caller":"traceutil/trace.go:171","msg":"trace[1437779139] range","detail":"{range_begin:/registry/serviceaccounts/headlamp/headlamp; range_end:; response_count:0; response_revision:1451; }","duration":"108.460248ms","start":"2025-04-14T14:20:08.913059Z","end":"2025-04-14T14:20:09.021519Z","steps":["trace[1437779139] 'range keys from in-memory index tree'  (duration: 108.272794ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:22:38 up 5 min,  0 users,  load average: 0.51, 1.22, 0.63
	Linux addons-885191 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [81d34a3b0b695dc6948dea9e540b3f8b3eff99d067c39438e9da10cbbf59ceae] <==
	I0414 14:18:50.227389       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0414 14:19:40.599206       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:48548: use of closed network connection
	E0414 14:19:40.792726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:48578: use of closed network connection
	I0414 14:19:50.291405       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.143.179"}
	I0414 14:20:15.408278       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0414 14:20:15.605790       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.191.142"}
	I0414 14:20:15.789348       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0414 14:20:21.616712       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0414 14:20:22.654138       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0414 14:20:35.092431       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0414 14:20:47.552586       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 14:20:47.552629       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 14:20:47.594556       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 14:20:47.594618       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 14:20:47.616633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 14:20:47.617519       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 14:20:47.697414       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 14:20:47.697469       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 14:20:47.754793       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 14:20:47.754849       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0414 14:20:48.697897       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0414 14:20:48.754842       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0414 14:20:48.760499       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0414 14:20:51.119760       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0414 14:22:36.664989       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.173.57"}
	
	
	==> kube-controller-manager [408d903b3bc46856b138efd4ebf27a1ec3caae2acb63258fd29def7525d6f2b6] <==
	W0414 14:21:38.579609       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 14:21:38.580533       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0414 14:21:38.581409       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 14:21:38.581436       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 14:22:06.559599       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 14:22:06.560599       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0414 14:22:06.561534       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 14:22:06.561568       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 14:22:10.579584       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 14:22:10.580950       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 14:22:10.581987       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 14:22:10.582052       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 14:22:14.426502       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 14:22:14.427562       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0414 14:22:14.428530       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 14:22:14.428598       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 14:22:36.235456       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 14:22:36.236635       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0414 14:22:36.237519       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 14:22:36.237592       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0414 14:22:36.489184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="45.621724ms"
	I0414 14:22:36.519368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="30.121033ms"
	I0414 14:22:36.519455       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="40.274µs"
	I0414 14:22:38.274014       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="12.259982ms"
	I0414 14:22:38.274098       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="39.707µs"
	
	
	==> kube-proxy [3b4b293b686762230de7fcc66d7f5b833274f44a9772600237e06655a7280dfe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0414 14:18:06.887204       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0414 14:18:06.906808       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	E0414 14:18:06.906883       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 14:18:07.025995       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0414 14:18:07.026076       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0414 14:18:07.026103       1 server_linux.go:170] "Using iptables Proxier"
	I0414 14:18:07.030888       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 14:18:07.031217       1 server.go:497] "Version info" version="v1.32.2"
	I0414 14:18:07.031233       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 14:18:07.033046       1 config.go:199] "Starting service config controller"
	I0414 14:18:07.033073       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 14:18:07.033105       1 config.go:105] "Starting endpoint slice config controller"
	I0414 14:18:07.033109       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 14:18:07.033795       1 config.go:329] "Starting node config controller"
	I0414 14:18:07.033802       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 14:18:07.134066       1 shared_informer.go:320] Caches are synced for node config
	I0414 14:18:07.134095       1 shared_informer.go:320] Caches are synced for service config
	I0414 14:18:07.134104       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1144f438b98994e85d0b73a977a1016923f80136a7232241de4de925613203e8] <==
	W0414 14:17:58.364502       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0414 14:17:58.364592       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.221211       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0414 14:17:59.221269       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.287997       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0414 14:17:59.288049       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.308734       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0414 14:17:59.308799       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.402463       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0414 14:17:59.402515       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.445791       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0414 14:17:59.445843       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.466546       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 14:17:59.466606       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.507900       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 14:17:59.508033       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 14:17:59.550047       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0414 14:17:59.550180       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.593045       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0414 14:17:59.593115       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.603636       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0414 14:17:59.603696       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 14:17:59.638973       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0414 14:17:59.639377       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0414 14:18:01.940027       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 14:22:01 addons-885191 kubelet[1231]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 14 14:22:01 addons-885191 kubelet[1231]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 14 14:22:01 addons-885191 kubelet[1231]: E0414 14:22:01.347393    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640521346978003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595807,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:22:01 addons-885191 kubelet[1231]: E0414 14:22:01.347420    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640521346978003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595807,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:22:03 addons-885191 kubelet[1231]: I0414 14:22:03.191547    1231 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-qdw2j" secret="" err="secret \"gcp-auth\" not found"
	Apr 14 14:22:11 addons-885191 kubelet[1231]: I0414 14:22:11.189440    1231 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 14 14:22:11 addons-885191 kubelet[1231]: E0414 14:22:11.350558    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640531349796445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595807,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:22:11 addons-885191 kubelet[1231]: E0414 14:22:11.350604    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640531349796445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595807,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:22:21 addons-885191 kubelet[1231]: E0414 14:22:21.353582    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640541353019881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595807,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:22:21 addons-885191 kubelet[1231]: E0414 14:22:21.353645    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640541353019881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595807,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:22:31 addons-885191 kubelet[1231]: E0414 14:22:31.356826    1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640551356345295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595807,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:22:31 addons-885191 kubelet[1231]: E0414 14:22:31.357305    1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744640551356345295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595807,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.481890    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="fd632614-8bd8-4ad4-a983-45e2eda50d32" containerName="liveness-probe"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.481991    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="fd632614-8bd8-4ad4-a983-45e2eda50d32" containerName="hostpath"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482000    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="e2e5ed2a-3606-4a75-a5c1-2105d0d3d2be" containerName="volume-snapshot-controller"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482006    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="7bfe0e81-480d-4a33-8c1b-1d77d47dfab1" containerName="csi-attacher"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482012    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="fd632614-8bd8-4ad4-a983-45e2eda50d32" containerName="node-driver-registrar"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482017    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="26112d58-08bf-4e3f-aa2c-1a5e25d293f7" containerName="task-pv-container"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482023    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="79289428-4f28-435b-bbc2-bf8e07d446cb" containerName="volume-snapshot-controller"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482027    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="fd632614-8bd8-4ad4-a983-45e2eda50d32" containerName="csi-external-health-monitor-controller"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482032    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="fd632614-8bd8-4ad4-a983-45e2eda50d32" containerName="csi-snapshotter"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482063    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="fd632614-8bd8-4ad4-a983-45e2eda50d32" containerName="csi-provisioner"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482074    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="4f0aad39-2db3-4be0-8479-2821428be192" containerName="local-path-provisioner"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.482084    1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="ddfb077d-7366-4bc7-954b-96bf01257c95" containerName="csi-resizer"
	Apr 14 14:22:36 addons-885191 kubelet[1231]: I0414 14:22:36.565772    1231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv5sx\" (UniqueName: \"kubernetes.io/projected/ac4bda66-8184-4f99-b2cb-a9af21219fcc-kube-api-access-fv5sx\") pod \"hello-world-app-7d9564db4-bp4l4\" (UID: \"ac4bda66-8184-4f99-b2cb-a9af21219fcc\") " pod="default/hello-world-app-7d9564db4-bp4l4"
	
	
	==> storage-provisioner [f27cae2375fcf217e6bbbe07b5974fedad2c8af0f6a4c2dbf77c675ec05b5180] <==
	I0414 14:18:14.812200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 14:18:15.187317       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 14:18:15.187377       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 14:18:15.266418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 14:18:15.266751       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-885191_8cacd4be-6dec-4d34-8922-1b9e0ece02be!
	I0414 14:18:15.287624       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36196cfd-7549-4742-8f78-4f52b9563f8e", APIVersion:"v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-885191_8cacd4be-6dec-4d34-8922-1b9e0ece02be became leader
	I0414 14:18:15.378234       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-885191_8cacd4be-6dec-4d34-8922-1b9e0ece02be!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-885191 -n addons-885191
helpers_test.go:261: (dbg) Run:  kubectl --context addons-885191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-58wtg ingress-nginx-admission-patch-h9s5b
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-885191 describe pod ingress-nginx-admission-create-58wtg ingress-nginx-admission-patch-h9s5b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-885191 describe pod ingress-nginx-admission-create-58wtg ingress-nginx-admission-patch-h9s5b: exit status 1 (62.13898ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-58wtg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h9s5b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-885191 describe pod ingress-nginx-admission-create-58wtg ingress-nginx-admission-patch-h9s5b: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable ingress-dns --alsologtostderr -v=1: (1.055124775s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable ingress --alsologtostderr -v=1: (7.733737081s)
--- FAIL: TestAddons/parallel/Ingress (152.98s)

                                                
                                    
TestPreload (203.84s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-191380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0414 15:18:18.359241 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-191380 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m9.775185464s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-191380 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-191380 image pull gcr.io/k8s-minikube/busybox: (2.555479782s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-191380
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-191380: (7.307661937s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-191380 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0414 15:19:31.482528 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-191380 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.076490345s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-191380 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:631: *** TestPreload FAILED at 2025-04-14 15:20:09.991053884 +0000 UTC m=+3787.983812386
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-191380 -n test-preload-191380
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-191380 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-191380 logs -n 25: (1.117784125s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-981731 ssh -n                                                                 | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:04 UTC | 14 Apr 25 15:04 UTC |
	|         | multinode-981731-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-981731 ssh -n multinode-981731 sudo cat                                       | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:04 UTC | 14 Apr 25 15:04 UTC |
	|         | /home/docker/cp-test_multinode-981731-m03_multinode-981731.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-981731 cp multinode-981731-m03:/home/docker/cp-test.txt                       | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:04 UTC | 14 Apr 25 15:04 UTC |
	|         | multinode-981731-m02:/home/docker/cp-test_multinode-981731-m03_multinode-981731-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-981731 ssh -n                                                                 | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:04 UTC | 14 Apr 25 15:04 UTC |
	|         | multinode-981731-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-981731 ssh -n multinode-981731-m02 sudo cat                                   | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:04 UTC | 14 Apr 25 15:04 UTC |
	|         | /home/docker/cp-test_multinode-981731-m03_multinode-981731-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-981731 node stop m03                                                          | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:04 UTC | 14 Apr 25 15:04 UTC |
	| node    | multinode-981731 node start                                                             | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:04 UTC | 14 Apr 25 15:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-981731                                                                | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:05 UTC |                     |
	| stop    | -p multinode-981731                                                                     | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:05 UTC | 14 Apr 25 15:08 UTC |
	| start   | -p multinode-981731                                                                     | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:08 UTC | 14 Apr 25 15:11 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-981731                                                                | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:11 UTC |                     |
	| node    | multinode-981731 node delete                                                            | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:11 UTC | 14 Apr 25 15:11 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-981731 stop                                                                   | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:11 UTC | 14 Apr 25 15:14 UTC |
	| start   | -p multinode-981731                                                                     | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:14 UTC | 14 Apr 25 15:16 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-981731                                                                | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:16 UTC |                     |
	| start   | -p multinode-981731-m02                                                                 | multinode-981731-m02 | jenkins | v1.35.0 | 14 Apr 25 15:16 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-981731-m03                                                                 | multinode-981731-m03 | jenkins | v1.35.0 | 14 Apr 25 15:16 UTC | 14 Apr 25 15:16 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-981731                                                                 | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:16 UTC |                     |
	| delete  | -p multinode-981731-m03                                                                 | multinode-981731-m03 | jenkins | v1.35.0 | 14 Apr 25 15:16 UTC | 14 Apr 25 15:16 UTC |
	| delete  | -p multinode-981731                                                                     | multinode-981731     | jenkins | v1.35.0 | 14 Apr 25 15:16 UTC | 14 Apr 25 15:16 UTC |
	| start   | -p test-preload-191380                                                                  | test-preload-191380  | jenkins | v1.35.0 | 14 Apr 25 15:16 UTC | 14 Apr 25 15:18 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-191380 image pull                                                          | test-preload-191380  | jenkins | v1.35.0 | 14 Apr 25 15:18 UTC | 14 Apr 25 15:19 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-191380                                                                  | test-preload-191380  | jenkins | v1.35.0 | 14 Apr 25 15:19 UTC | 14 Apr 25 15:19 UTC |
	| start   | -p test-preload-191380                                                                  | test-preload-191380  | jenkins | v1.35.0 | 14 Apr 25 15:19 UTC | 14 Apr 25 15:20 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-191380 image list                                                          | test-preload-191380  | jenkins | v1.35.0 | 14 Apr 25 15:20 UTC | 14 Apr 25 15:20 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 15:19:08
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 15:19:08.729353 1885939 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:19:08.729663 1885939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:19:08.729676 1885939 out.go:358] Setting ErrFile to fd 2...
	I0414 15:19:08.729683 1885939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:19:08.729925 1885939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:19:08.730551 1885939 out.go:352] Setting JSON to false
	I0414 15:19:08.731662 1885939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":39693,"bootTime":1744604256,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:19:08.731781 1885939 start.go:139] virtualization: kvm guest
	I0414 15:19:08.733938 1885939 out.go:177] * [test-preload-191380] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:19:08.735299 1885939 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:19:08.735334 1885939 notify.go:220] Checking for updates...
	I0414 15:19:08.737869 1885939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:19:08.739121 1885939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:19:08.740250 1885939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:19:08.741418 1885939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:19:08.742555 1885939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:19:08.744261 1885939 config.go:182] Loaded profile config "test-preload-191380": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 15:19:08.744742 1885939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:19:08.744820 1885939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:19:08.760709 1885939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39641
	I0414 15:19:08.761301 1885939 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:19:08.761877 1885939 main.go:141] libmachine: Using API Version  1
	I0414 15:19:08.761907 1885939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:19:08.762239 1885939 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:19:08.762452 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:08.764424 1885939 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 15:19:08.765991 1885939 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:19:08.766390 1885939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:19:08.766450 1885939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:19:08.782568 1885939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0414 15:19:08.783021 1885939 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:19:08.783507 1885939 main.go:141] libmachine: Using API Version  1
	I0414 15:19:08.783535 1885939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:19:08.783905 1885939 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:19:08.784100 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:08.821771 1885939 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 15:19:08.823148 1885939 start.go:297] selected driver: kvm2
	I0414 15:19:08.823163 1885939 start.go:901] validating driver "kvm2" against &{Name:test-preload-191380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-191380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:19:08.823289 1885939 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:19:08.823986 1885939 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:19:08.824061 1885939 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 15:19:08.840681 1885939 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 15:19:08.841119 1885939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:19:08.841167 1885939 cni.go:84] Creating CNI manager for ""
	I0414 15:19:08.841210 1885939 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:19:08.841264 1885939 start.go:340] cluster config:
	{Name:test-preload-191380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-191380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:19:08.841374 1885939 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:19:08.843723 1885939 out.go:177] * Starting "test-preload-191380" primary control-plane node in "test-preload-191380" cluster
	I0414 15:19:08.844810 1885939 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 15:19:08.871100 1885939 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 15:19:08.871144 1885939 cache.go:56] Caching tarball of preloaded images
	I0414 15:19:08.871323 1885939 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 15:19:08.873007 1885939 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0414 15:19:08.874194 1885939 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 15:19:08.900702 1885939 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0414 15:19:12.222362 1885939 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 15:19:12.222517 1885939 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0414 15:19:13.106874 1885939 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
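	The md5 embedded in the download URL above is what gets saved and then verified against the cached tarball. A rough manual equivalent, using the value and path from the log (a sketch; minikube performs this check internally in Go):
	  # verify the cached preload tarball against the md5 carried in the download URL (sketch)
	  TARBALL=/home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	  echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  $TARBALL" | md5sum -c -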
	I0414 15:19:13.107055 1885939 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/config.json ...
	I0414 15:19:13.107338 1885939 start.go:360] acquireMachinesLock for test-preload-191380: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:19:13.107429 1885939 start.go:364] duration metric: took 63.816µs to acquireMachinesLock for "test-preload-191380"
	I0414 15:19:13.107452 1885939 start.go:96] Skipping create...Using existing machine configuration
	I0414 15:19:13.107460 1885939 fix.go:54] fixHost starting: 
	I0414 15:19:13.107776 1885939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:19:13.107829 1885939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:19:13.123298 1885939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39499
	I0414 15:19:13.123876 1885939 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:19:13.124421 1885939 main.go:141] libmachine: Using API Version  1
	I0414 15:19:13.124445 1885939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:19:13.124793 1885939 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:19:13.125010 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:13.125168 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetState
	I0414 15:19:13.127210 1885939 fix.go:112] recreateIfNeeded on test-preload-191380: state=Stopped err=<nil>
	I0414 15:19:13.127251 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	W0414 15:19:13.127403 1885939 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 15:19:13.129513 1885939 out.go:177] * Restarting existing kvm2 VM for "test-preload-191380" ...
	I0414 15:19:13.130967 1885939 main.go:141] libmachine: (test-preload-191380) Calling .Start
	I0414 15:19:13.131192 1885939 main.go:141] libmachine: (test-preload-191380) starting domain...
	I0414 15:19:13.131211 1885939 main.go:141] libmachine: (test-preload-191380) ensuring networks are active...
	I0414 15:19:13.132089 1885939 main.go:141] libmachine: (test-preload-191380) Ensuring network default is active
	I0414 15:19:13.132473 1885939 main.go:141] libmachine: (test-preload-191380) Ensuring network mk-test-preload-191380 is active
	I0414 15:19:13.132884 1885939 main.go:141] libmachine: (test-preload-191380) getting domain XML...
	I0414 15:19:13.133599 1885939 main.go:141] libmachine: (test-preload-191380) creating domain...
	I0414 15:19:13.480625 1885939 main.go:141] libmachine: (test-preload-191380) waiting for IP...
	I0414 15:19:13.481554 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:13.481979 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:13.482084 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:13.481989 1885991 retry.go:31] will retry after 212.525751ms: waiting for domain to come up
	I0414 15:19:13.696646 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:13.697071 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:13.697105 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:13.697035 1885991 retry.go:31] will retry after 383.886873ms: waiting for domain to come up
	I0414 15:19:14.082910 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:14.083349 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:14.083410 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:14.083312 1885991 retry.go:31] will retry after 406.108196ms: waiting for domain to come up
	I0414 15:19:14.490824 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:14.491231 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:14.491291 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:14.491212 1885991 retry.go:31] will retry after 477.899644ms: waiting for domain to come up
	I0414 15:19:14.970980 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:14.971393 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:14.971425 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:14.971373 1885991 retry.go:31] will retry after 704.504506ms: waiting for domain to come up
	I0414 15:19:15.677266 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:15.677634 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:15.677668 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:15.677613 1885991 retry.go:31] will retry after 949.831799ms: waiting for domain to come up
	I0414 15:19:16.628780 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:16.629188 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:16.629214 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:16.629149 1885991 retry.go:31] will retry after 877.841457ms: waiting for domain to come up
	I0414 15:19:17.508230 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:17.508606 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:17.508648 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:17.508568 1885991 retry.go:31] will retry after 1.140386026s: waiting for domain to come up
	I0414 15:19:18.650190 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:18.650567 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:18.650589 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:18.650527 1885991 retry.go:31] will retry after 1.646045526s: waiting for domain to come up
	I0414 15:19:20.298475 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:20.298976 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:20.299010 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:20.298929 1885991 retry.go:31] will retry after 1.881817983s: waiting for domain to come up
	I0414 15:19:22.183131 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:22.183540 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:22.183592 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:22.183523 1885991 retry.go:31] will retry after 2.265856691s: waiting for domain to come up
	I0414 15:19:24.451377 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:24.451728 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:24.451754 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:24.451689 1885991 retry.go:31] will retry after 2.427062158s: waiting for domain to come up
	I0414 15:19:26.882380 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:26.882832 1885939 main.go:141] libmachine: (test-preload-191380) DBG | unable to find current IP address of domain test-preload-191380 in network mk-test-preload-191380
	I0414 15:19:26.882857 1885939 main.go:141] libmachine: (test-preload-191380) DBG | I0414 15:19:26.882794 1885991 retry.go:31] will retry after 3.485982486s: waiting for domain to come up
	I0414 15:19:30.371939 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.372345 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has current primary IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.372366 1885939 main.go:141] libmachine: (test-preload-191380) found domain IP: 192.168.39.135
	I0414 15:19:30.372401 1885939 main.go:141] libmachine: (test-preload-191380) reserving static IP address...
	I0414 15:19:30.372966 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "test-preload-191380", mac: "52:54:00:04:bd:bc", ip: "192.168.39.135"} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:30.372999 1885939 main.go:141] libmachine: (test-preload-191380) reserved static IP address 192.168.39.135 for domain test-preload-191380
	I0414 15:19:30.373017 1885939 main.go:141] libmachine: (test-preload-191380) DBG | skip adding static IP to network mk-test-preload-191380 - found existing host DHCP lease matching {name: "test-preload-191380", mac: "52:54:00:04:bd:bc", ip: "192.168.39.135"}
	I0414 15:19:30.373052 1885939 main.go:141] libmachine: (test-preload-191380) waiting for SSH...
	I0414 15:19:30.373077 1885939 main.go:141] libmachine: (test-preload-191380) DBG | Getting to WaitForSSH function...
	I0414 15:19:30.375210 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.375737 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:30.375760 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.375883 1885939 main.go:141] libmachine: (test-preload-191380) DBG | Using SSH client type: external
	I0414 15:19:30.375926 1885939 main.go:141] libmachine: (test-preload-191380) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/test-preload-191380/id_rsa (-rw-------)
	I0414 15:19:30.375964 1885939 main.go:141] libmachine: (test-preload-191380) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/test-preload-191380/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:19:30.375981 1885939 main.go:141] libmachine: (test-preload-191380) DBG | About to run SSH command:
	I0414 15:19:30.375994 1885939 main.go:141] libmachine: (test-preload-191380) DBG | exit 0
	I0414 15:19:30.498920 1885939 main.go:141] libmachine: (test-preload-191380) DBG | SSH cmd err, output: <nil>: 
	I0414 15:19:30.499343 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetConfigRaw
	I0414 15:19:30.500065 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetIP
	I0414 15:19:30.502846 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.503204 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:30.503241 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.503482 1885939 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/config.json ...
	I0414 15:19:30.503705 1885939 machine.go:93] provisionDockerMachine start ...
	I0414 15:19:30.503725 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:30.503961 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:30.506416 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.506823 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:30.506850 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.507007 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:30.507208 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:30.507391 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:30.507544 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:30.507723 1885939 main.go:141] libmachine: Using SSH client type: native
	I0414 15:19:30.508139 1885939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0414 15:19:30.508157 1885939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 15:19:30.615336 1885939 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 15:19:30.615371 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetMachineName
	I0414 15:19:30.615643 1885939 buildroot.go:166] provisioning hostname "test-preload-191380"
	I0414 15:19:30.615676 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetMachineName
	I0414 15:19:30.615865 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:30.618720 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.619045 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:30.619077 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.619269 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:30.619428 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:30.619607 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:30.619719 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:30.619866 1885939 main.go:141] libmachine: Using SSH client type: native
	I0414 15:19:30.620109 1885939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0414 15:19:30.620124 1885939 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-191380 && echo "test-preload-191380" | sudo tee /etc/hostname
	I0414 15:19:30.741767 1885939 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-191380
	
	I0414 15:19:30.741800 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:30.744813 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.745239 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:30.745270 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.745498 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:30.745714 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:30.745901 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:30.746075 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:30.746203 1885939 main.go:141] libmachine: Using SSH client type: native
	I0414 15:19:30.746430 1885939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0414 15:19:30.746447 1885939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-191380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-191380/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-191380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:19:30.860266 1885939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:19:30.860306 1885939 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:19:30.860360 1885939 buildroot.go:174] setting up certificates
	I0414 15:19:30.860372 1885939 provision.go:84] configureAuth start
	I0414 15:19:30.860385 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetMachineName
	I0414 15:19:30.860805 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetIP
	I0414 15:19:30.863608 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.864004 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:30.864031 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.864246 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:30.866545 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.867100 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:30.867156 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:30.867303 1885939 provision.go:143] copyHostCerts
	I0414 15:19:30.867364 1885939 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:19:30.867386 1885939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:19:30.867453 1885939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:19:30.867586 1885939 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:19:30.867597 1885939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:19:30.867624 1885939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:19:30.867682 1885939 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:19:30.867686 1885939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:19:30.867705 1885939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:19:30.867750 1885939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.test-preload-191380 san=[127.0.0.1 192.168.39.135 localhost minikube test-preload-191380]
	I0414 15:19:31.283856 1885939 provision.go:177] copyRemoteCerts
	I0414 15:19:31.283923 1885939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:19:31.283950 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:31.286851 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.287208 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:31.287238 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.287404 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:31.287602 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:31.287827 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:31.288012 1885939 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/test-preload-191380/id_rsa Username:docker}
	I0414 15:19:31.368659 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:19:31.395073 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0414 15:19:31.421481 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:19:31.447304 1885939 provision.go:87] duration metric: took 586.918125ms to configureAuth
	I0414 15:19:31.447337 1885939 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:19:31.447512 1885939 config.go:182] Loaded profile config "test-preload-191380": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 15:19:31.447629 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:31.450650 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.450971 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:31.451008 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.451176 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:31.451384 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:31.451522 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:31.451706 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:31.451901 1885939 main.go:141] libmachine: Using SSH client type: native
	I0414 15:19:31.452123 1885939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0414 15:19:31.452144 1885939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:19:31.679506 1885939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:19:31.679536 1885939 machine.go:96] duration metric: took 1.175816159s to provisionDockerMachine
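	The CRIO_MINIKUBE_OPTIONS drop-in written a few lines above can be sanity-checked by hand on the node; the two commands below are an illustrative assumption, not part of the test run:
	  # confirm the drop-in landed and cri-o came back up after the restart (sketch)
	  cat /etc/sysconfig/crio.minikube
	  systemctl is-active crio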
	I0414 15:19:31.679553 1885939 start.go:293] postStartSetup for "test-preload-191380" (driver="kvm2")
	I0414 15:19:31.679567 1885939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:19:31.679589 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:31.679963 1885939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:19:31.679992 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:31.682746 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.683114 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:31.683157 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.683294 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:31.683501 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:31.683653 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:31.683776 1885939 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/test-preload-191380/id_rsa Username:docker}
	I0414 15:19:31.765634 1885939 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:19:31.770415 1885939 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:19:31.770454 1885939 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:19:31.770619 1885939 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:19:31.770738 1885939 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:19:31.770832 1885939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:19:31.781195 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:19:31.807561 1885939 start.go:296] duration metric: took 127.990344ms for postStartSetup
	I0414 15:19:31.807611 1885939 fix.go:56] duration metric: took 18.700152014s for fixHost
	I0414 15:19:31.807634 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:31.810148 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.810457 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:31.810483 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.810684 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:31.810948 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:31.811117 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:31.811287 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:31.811463 1885939 main.go:141] libmachine: Using SSH client type: native
	I0414 15:19:31.811673 1885939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0414 15:19:31.811683 1885939 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:19:31.915586 1885939 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744643971.885439504
	
	I0414 15:19:31.915624 1885939 fix.go:216] guest clock: 1744643971.885439504
	I0414 15:19:31.915635 1885939 fix.go:229] Guest: 2025-04-14 15:19:31.885439504 +0000 UTC Remote: 2025-04-14 15:19:31.80761553 +0000 UTC m=+23.118224648 (delta=77.823974ms)
	I0414 15:19:31.915667 1885939 fix.go:200] guest clock delta is within tolerance: 77.823974ms
	I0414 15:19:31.915674 1885939 start.go:83] releasing machines lock for "test-preload-191380", held for 18.808232184s
	I0414 15:19:31.915707 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:31.916010 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetIP
	I0414 15:19:31.918921 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.919292 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:31.919323 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.919455 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:31.920003 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:31.920200 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:31.920294 1885939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:19:31.920338 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:31.920414 1885939 ssh_runner.go:195] Run: cat /version.json
	I0414 15:19:31.920428 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:31.923304 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.923340 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.923731 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:31.923771 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.923828 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:31.923873 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:31.924048 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:31.924124 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:31.924284 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:31.924309 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:31.924447 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:31.924457 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:31.924600 1885939 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/test-preload-191380/id_rsa Username:docker}
	I0414 15:19:31.924602 1885939 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/test-preload-191380/id_rsa Username:docker}
	I0414 15:19:32.033075 1885939 ssh_runner.go:195] Run: systemctl --version
	I0414 15:19:32.039625 1885939 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:19:32.187799 1885939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:19:32.195427 1885939 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:19:32.195522 1885939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:19:32.213377 1885939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:19:32.213408 1885939 start.go:495] detecting cgroup driver to use...
	I0414 15:19:32.213474 1885939 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:19:32.231382 1885939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:19:32.247510 1885939 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:19:32.247581 1885939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:19:32.262836 1885939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:19:32.278219 1885939 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:19:32.401990 1885939 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:19:32.561864 1885939 docker.go:233] disabling docker service ...
	I0414 15:19:32.561957 1885939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:19:32.576515 1885939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:19:32.590360 1885939 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:19:32.708454 1885939 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:19:32.836352 1885939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:19:32.851491 1885939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:19:32.872207 1885939 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0414 15:19:32.872295 1885939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:19:32.883707 1885939 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:19:32.883804 1885939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:19:32.895312 1885939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:19:32.906422 1885939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:19:32.918012 1885939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:19:32.930149 1885939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:19:32.941993 1885939 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:19:32.963553 1885939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
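	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands shown; section headers and any other defaults in the file are omitted):
	  # net effect of the edits on 02-crio.conf (sketch)
	  pause_image = "registry.k8s.io/pause:3.7"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]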
	I0414 15:19:32.975604 1885939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:19:32.986601 1885939 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:19:32.986679 1885939 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:19:33.003115 1885939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
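	The sysctl failure above is expected while the br_netfilter module is not loaded: the net.bridge.* keys only appear under /proc/sys once the module is in. The recovery amounts to the following (a sketch; the re-check of the key is added here for illustration):
	  # load the bridge netfilter module, after which the key exists, then enable IPv4 forwarding
	  sudo modprobe br_netfilter
	  sudo sysctl net.bridge.bridge-nf-call-iptables
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"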
	I0414 15:19:33.014111 1885939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:19:33.131257 1885939 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 15:19:33.224706 1885939 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:19:33.224801 1885939 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:19:33.231074 1885939 start.go:563] Will wait 60s for crictl version
	I0414 15:19:33.231228 1885939 ssh_runner.go:195] Run: which crictl
	I0414 15:19:33.235520 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:19:33.275297 1885939 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:19:33.275401 1885939 ssh_runner.go:195] Run: crio --version
	I0414 15:19:33.305062 1885939 ssh_runner.go:195] Run: crio --version
	I0414 15:19:33.337283 1885939 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0414 15:19:33.338464 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetIP
	I0414 15:19:33.341219 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:33.341533 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:33.341557 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:33.341783 1885939 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 15:19:33.346217 1885939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:19:33.359891 1885939 kubeadm.go:883] updating cluster {Name:test-preload-191380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-191380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:19:33.360032 1885939 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0414 15:19:33.360085 1885939 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:19:33.400944 1885939 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
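	The "assuming images are not preloaded" conclusion comes from scanning the crictl image list for the expected tags; the same check can be repeated by hand for a single image (a sketch; the grep pattern is illustrative):
	  # is the expected API server image already in cri-o's store? (sketch)
	  sudo crictl images --output json | grep -c "registry.k8s.io/kube-apiserver:v1.24.4"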
	I0414 15:19:33.401017 1885939 ssh_runner.go:195] Run: which lz4
	I0414 15:19:33.405640 1885939 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:19:33.410083 1885939 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:19:33.410125 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0414 15:19:35.103015 1885939 crio.go:462] duration metric: took 1.697426266s to copy over tarball
	I0414 15:19:35.103098 1885939 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 15:19:37.615989 1885939 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.512857339s)
	I0414 15:19:37.616028 1885939 crio.go:469] duration metric: took 2.512973068s to extract the tarball
	I0414 15:19:37.616036 1885939 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 15:19:37.657930 1885939 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:19:37.706484 1885939 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0414 15:19:37.706511 1885939 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 15:19:37.706580 1885939 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 15:19:37.706615 1885939 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 15:19:37.706629 1885939 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0414 15:19:37.706667 1885939 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 15:19:37.706672 1885939 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 15:19:37.706681 1885939 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 15:19:37.706580 1885939 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:19:37.706633 1885939 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0414 15:19:37.708162 1885939 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 15:19:37.708175 1885939 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 15:19:37.708181 1885939 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0414 15:19:37.708160 1885939 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 15:19:37.708210 1885939 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 15:19:37.708225 1885939 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:19:37.708233 1885939 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0414 15:19:37.708233 1885939 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 15:19:37.857371 1885939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0414 15:19:37.860338 1885939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0414 15:19:37.863104 1885939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0414 15:19:37.863135 1885939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 15:19:37.867521 1885939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0414 15:19:37.867953 1885939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0414 15:19:37.892252 1885939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0414 15:19:37.980887 1885939 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0414 15:19:37.980963 1885939 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0414 15:19:37.981041 1885939 ssh_runner.go:195] Run: which crictl
	I0414 15:19:38.027169 1885939 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0414 15:19:38.027242 1885939 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0414 15:19:38.027303 1885939 ssh_runner.go:195] Run: which crictl
	I0414 15:19:38.039444 1885939 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0414 15:19:38.039489 1885939 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0414 15:19:38.039563 1885939 ssh_runner.go:195] Run: which crictl
	I0414 15:19:38.048587 1885939 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0414 15:19:38.048641 1885939 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0414 15:19:38.048649 1885939 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0414 15:19:38.048701 1885939 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0414 15:19:38.048720 1885939 ssh_runner.go:195] Run: which crictl
	I0414 15:19:38.048742 1885939 ssh_runner.go:195] Run: which crictl
	I0414 15:19:38.048720 1885939 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0414 15:19:38.048785 1885939 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 15:19:38.048807 1885939 ssh_runner.go:195] Run: which crictl
	I0414 15:19:38.062306 1885939 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0414 15:19:38.062348 1885939 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0414 15:19:38.062408 1885939 ssh_runner.go:195] Run: which crictl
	I0414 15:19:38.062417 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 15:19:38.062424 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 15:19:38.062481 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 15:19:38.062496 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 15:19:38.062567 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 15:19:38.062589 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 15:19:38.210400 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 15:19:38.210400 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 15:19:38.210474 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 15:19:38.210492 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 15:19:38.210620 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 15:19:38.210634 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 15:19:38.210691 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 15:19:38.363480 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0414 15:19:38.363480 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 15:19:38.386427 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0414 15:19:38.386503 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0414 15:19:38.386585 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0414 15:19:38.386652 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0414 15:19:38.386703 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0414 15:19:38.527592 1885939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0414 15:19:38.527679 1885939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0414 15:19:38.527687 1885939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0414 15:19:38.527740 1885939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0414 15:19:38.527752 1885939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0414 15:19:38.527788 1885939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0414 15:19:38.527882 1885939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0414 15:19:38.538008 1885939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0414 15:19:38.538138 1885939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 15:19:38.555390 1885939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0414 15:19:38.555440 1885939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0414 15:19:38.555530 1885939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 15:19:38.555539 1885939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 15:19:38.585784 1885939 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0414 15:19:38.585809 1885939 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0414 15:19:38.585844 1885939 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0414 15:19:38.585885 1885939 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0414 15:19:38.585894 1885939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0414 15:19:38.585911 1885939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 15:19:38.585970 1885939 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0414 15:19:38.586053 1885939 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0414 15:19:38.586092 1885939 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0414 15:19:38.586129 1885939 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0414 15:19:38.630135 1885939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:19:42.455088 1885939 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.869151851s)
	I0414 15:19:42.455146 1885939 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0414 15:19:42.455144 1885939 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (3.869215159s)
	I0414 15:19:42.455174 1885939 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0414 15:19:42.455202 1885939 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0414 15:19:42.455202 1885939 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.825031422s)
	I0414 15:19:42.455284 1885939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0414 15:19:44.604907 1885939 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.149587321s)
	I0414 15:19:44.604949 1885939 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0414 15:19:44.604983 1885939 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0414 15:19:44.605036 1885939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0414 15:19:44.951008 1885939 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0414 15:19:44.951069 1885939 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 15:19:44.951131 1885939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0414 15:19:45.404034 1885939 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0414 15:19:45.404100 1885939 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 15:19:45.404168 1885939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0414 15:19:46.153777 1885939 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0414 15:19:46.153830 1885939 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 15:19:46.153880 1885939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0414 15:19:46.905984 1885939 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0414 15:19:46.906046 1885939 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 15:19:46.906117 1885939 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0414 15:19:47.755581 1885939 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0414 15:19:47.755671 1885939 cache_images.go:123] Successfully loaded all cached images
	I0414 15:19:47.755681 1885939 cache_images.go:92] duration metric: took 10.049156317s to LoadCachedImages
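
The block above is the standard cache-load flow when ShouldLoadCachedImages is set: inspect each required image in the runtime, remove any stale tag with crictl, stat the cached tarball under /var/lib/minikube/images, and finally load it with podman. A minimal sketch of that check-then-load step follows; it is written in Go with placeholder image and tarball names and is not minikube's actual cache_images.go.

// Sketch only: check whether an image is already present in the podman-backed
// runtime and, if not, load its cached tarball. Names/paths are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

// imagePresent reports whether `podman image inspect` succeeds for the ref.
func imagePresent(ref string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", ref).Run() == nil
}

// loadFromCache loads a previously transferred image tarball into the runtime.
func loadFromCache(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s failed: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if !imagePresent("registry.k8s.io/pause:3.7") {
		if err := loadFromCache("/var/lib/minikube/images/pause_3.7"); err != nil {
			fmt.Println(err)
		}
	}
}

The real flow also transfers tarballs from the host cache over SSH before loading, which is what the "Transferred and loaded ... from cache" lines above record.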
	I0414 15:19:47.755700 1885939 kubeadm.go:934] updating node { 192.168.39.135 8443 v1.24.4 crio true true} ...
	I0414 15:19:47.755828 1885939 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-191380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-191380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 15:19:47.755911 1885939 ssh_runner.go:195] Run: crio config
	I0414 15:19:47.803050 1885939 cni.go:84] Creating CNI manager for ""
	I0414 15:19:47.803074 1885939 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:19:47.803084 1885939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 15:19:47.803103 1885939 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-191380 NodeName:test-preload-191380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 15:19:47.803242 1885939 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-191380"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 15:19:47.803309 1885939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0414 15:19:47.814088 1885939 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 15:19:47.814171 1885939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 15:19:47.824622 1885939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0414 15:19:47.842469 1885939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 15:19:47.860560 1885939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
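
The kubelet drop-in, the kubelet.service unit, and kubeadm.yaml.new above are all generated in memory from the options struct logged at kubeadm.go:189 and then written out over SSH. A hedged sketch of that rendering step using Go's text/template follows; the struct fields and template below are invented for illustration and only cover a fragment of the real ClusterConfiguration.

// Illustrative sketch: render a ClusterConfiguration fragment from a few of
// the options shown in the log. Not minikube's bootstrapper template.
package main

import (
	"os"
	"text/template"
)

type clusterOpts struct {
	ClusterName       string
	KubernetesVersion string
	AdvertiseAddress  string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := clusterOpts{
		ClusterName:       "mk",
		KubernetesVersion: "v1.24.4",
		AdvertiseAddress:  "192.168.39.135",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}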
	I0414 15:19:47.879921 1885939 ssh_runner.go:195] Run: grep 192.168.39.135	control-plane.minikube.internal$ /etc/hosts
	I0414 15:19:47.884263 1885939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:19:47.897865 1885939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:19:48.032317 1885939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:19:48.050810 1885939 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380 for IP: 192.168.39.135
	I0414 15:19:48.050836 1885939 certs.go:194] generating shared ca certs ...
	I0414 15:19:48.050854 1885939 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:19:48.051060 1885939 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 15:19:48.051104 1885939 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 15:19:48.051118 1885939 certs.go:256] generating profile certs ...
	I0414 15:19:48.051265 1885939 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/client.key
	I0414 15:19:48.051372 1885939 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/apiserver.key.a3956fb5
	I0414 15:19:48.051424 1885939 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/proxy-client.key
	I0414 15:19:48.051535 1885939 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem (1338 bytes)
	W0414 15:19:48.051566 1885939 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270_empty.pem, impossibly tiny 0 bytes
	I0414 15:19:48.051576 1885939 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 15:19:48.051604 1885939 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 15:19:48.051647 1885939 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 15:19:48.051672 1885939 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 15:19:48.051711 1885939 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:19:48.052361 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 15:19:48.114375 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:19:48.149740 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:19:48.185132 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:19:48.213395 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0414 15:19:48.240881 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 15:19:48.277888 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:19:48.308212 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 15:19:48.333753 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:19:48.359819 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:19:48.386286 1885939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:19:48.413022 1885939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:19:48.431512 1885939 ssh_runner.go:195] Run: openssl version
	I0414 15:19:48.437887 1885939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:19:48.449643 1885939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:19:48.454722 1885939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:19:48.454802 1885939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:19:48.461178 1885939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:19:48.472764 1885939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:19:48.484730 1885939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:19:48.489847 1885939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:19:48.489935 1885939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:19:48.496194 1885939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 15:19:48.508048 1885939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:19:48.519719 1885939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:19:48.524805 1885939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:19:48.524877 1885939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:19:48.531141 1885939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 15:19:48.542678 1885939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:19:48.547898 1885939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 15:19:48.554647 1885939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 15:19:48.561458 1885939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 15:19:48.568223 1885939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 15:19:48.574582 1885939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 15:19:48.581208 1885939 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
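
Before deciding whether the restart can reuse existing material, each CA bundle is symlinked into /etc/ssl/certs under its subject hash (the <hash>.0 links above) and each control-plane certificate is probed with `openssl x509 -checkend 86400`, i.e. "does this cert expire within the next 24 hours?". A rough Go equivalent of that expiry probe, with a placeholder certificate path:

// Parse a PEM certificate and report whether it expires within the window.
// The path below is a placeholder taken from the log, not guaranteed to exist.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}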
	I0414 15:19:48.587710 1885939 kubeadm.go:392] StartCluster: {Name:test-preload-191380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-191380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:19:48.587838 1885939 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:19:48.587891 1885939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:19:48.628844 1885939 cri.go:89] found id: ""
	I0414 15:19:48.628938 1885939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 15:19:48.639624 1885939 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 15:19:48.639647 1885939 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 15:19:48.639709 1885939 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 15:19:48.650209 1885939 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 15:19:48.650689 1885939 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-191380" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:19:48.650871 1885939 kubeconfig.go:62] /home/jenkins/minikube-integration/20512-1845971/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-191380" cluster setting kubeconfig missing "test-preload-191380" context setting]
	I0414 15:19:48.651188 1885939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/kubeconfig: {Name:mk700cb2cf46a87df11c1873f52c26c76c14915e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:19:48.651685 1885939 kapi.go:59] client config for test-preload-191380: &rest.Config{Host:"https://192.168.39.135:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 15:19:48.652162 1885939 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0414 15:19:48.652178 1885939 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0414 15:19:48.652182 1885939 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0414 15:19:48.652185 1885939 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0414 15:19:48.652529 1885939 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 15:19:48.662819 1885939 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.135
	I0414 15:19:48.662870 1885939 kubeadm.go:1160] stopping kube-system containers ...
	I0414 15:19:48.662886 1885939 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 15:19:48.662958 1885939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:19:48.703448 1885939 cri.go:89] found id: ""
	I0414 15:19:48.703557 1885939 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 15:19:48.719840 1885939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:19:48.730226 1885939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:19:48.730246 1885939 kubeadm.go:157] found existing configuration files:
	
	I0414 15:19:48.730295 1885939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:19:48.740193 1885939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:19:48.740258 1885939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:19:48.750408 1885939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:19:48.760308 1885939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:19:48.760392 1885939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:19:48.770729 1885939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:19:48.780554 1885939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:19:48.780637 1885939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:19:48.791051 1885939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:19:48.801161 1885939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:19:48.801223 1885939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:19:48.811280 1885939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:19:48.821549 1885939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:19:48.936331 1885939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:19:49.580301 1885939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:19:49.858993 1885939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:19:49.941593 1885939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
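
restartPrimaryControlPlane rebuilds the control plane by replaying individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml rather than running a full `kubeadm init`. A minimal sketch of driving those phases in order; the binary and config paths are taken from the log, the rest is illustrative:

// Replay the kubeadm init phases shown above, stopping on the first failure.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	kubeadm := "/var/lib/minikube/binaries/v1.24.4/kubeadm"
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", phase, err, out)
			return
		}
	}
}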
	I0414 15:19:50.049512 1885939 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:19:50.049626 1885939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:19:50.550113 1885939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:19:51.050640 1885939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:19:51.068579 1885939 api_server.go:72] duration metric: took 1.019063604s to wait for apiserver process to appear ...
	I0414 15:19:51.068619 1885939 api_server.go:88] waiting for apiserver healthz status ...
	I0414 15:19:51.068648 1885939 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0414 15:19:51.069188 1885939 api_server.go:269] stopped: https://192.168.39.135:8443/healthz: Get "https://192.168.39.135:8443/healthz": dial tcp 192.168.39.135:8443: connect: connection refused
	I0414 15:19:51.568929 1885939 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0414 15:19:51.569694 1885939 api_server.go:269] stopped: https://192.168.39.135:8443/healthz: Get "https://192.168.39.135:8443/healthz": dial tcp 192.168.39.135:8443: connect: connection refused
	I0414 15:19:52.069460 1885939 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0414 15:19:55.466973 1885939 api_server.go:279] https://192.168.39.135:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 15:19:55.467022 1885939 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 15:19:55.467047 1885939 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0414 15:19:55.482238 1885939 api_server.go:279] https://192.168.39.135:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0414 15:19:55.482275 1885939 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0414 15:19:55.569596 1885939 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0414 15:19:55.599894 1885939 api_server.go:279] https://192.168.39.135:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 15:19:55.599928 1885939 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 15:19:56.069635 1885939 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0414 15:19:56.076944 1885939 api_server.go:279] https://192.168.39.135:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 15:19:56.076992 1885939 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 15:19:56.569761 1885939 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0414 15:19:56.575797 1885939 api_server.go:279] https://192.168.39.135:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0414 15:19:56.575828 1885939 api_server.go:103] status: https://192.168.39.135:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0414 15:19:57.069562 1885939 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0414 15:19:57.076817 1885939 api_server.go:279] https://192.168.39.135:8443/healthz returned 200:
	ok
	I0414 15:19:57.088264 1885939 api_server.go:141] control plane version: v1.24.4
	I0414 15:19:57.088310 1885939 api_server.go:131] duration metric: took 6.019680741s to wait for apiserver health ...
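
The 403 responses above come from the anonymous probe hitting /healthz before the RBAC bootstrap roles exist, and the 500s from post-start hooks that have not finished; both are treated as "not healthy yet" and retried until the endpoint returns 200. A minimal polling sketch along those lines; unlike minikube it skips TLS verification instead of trusting the cluster CA, and the address and timeout are simply taken from the log:

// Poll the apiserver /healthz endpoint until it returns 200 or a deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 (anonymous user) and 500 (post-start hooks pending): retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.135:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}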
	I0414 15:19:57.088324 1885939 cni.go:84] Creating CNI manager for ""
	I0414 15:19:57.088335 1885939 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:19:57.090201 1885939 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 15:19:57.091498 1885939 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 15:19:57.111519 1885939 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 15:19:57.145951 1885939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 15:19:57.159081 1885939 system_pods.go:59] 7 kube-system pods found
	I0414 15:19:57.159121 1885939 system_pods.go:61] "coredns-6d4b75cb6d-9fpb5" [7e330922-0354-42a3-9d47-12a8aeeff522] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0414 15:19:57.159127 1885939 system_pods.go:61] "etcd-test-preload-191380" [bef0e243-d4a8-47ac-a05d-cf117030fb85] Running
	I0414 15:19:57.159136 1885939 system_pods.go:61] "kube-apiserver-test-preload-191380" [e1edd3eb-2ad5-4fd0-8c82-93db014f1c0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0414 15:19:57.159140 1885939 system_pods.go:61] "kube-controller-manager-test-preload-191380" [d8de101a-96d1-4f8e-b456-70178fb7bfb0] Running
	I0414 15:19:57.159146 1885939 system_pods.go:61] "kube-proxy-lnh97" [5630392d-1c1a-4408-8b6b-a9dc9def928d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0414 15:19:57.159149 1885939 system_pods.go:61] "kube-scheduler-test-preload-191380" [2af63d37-9105-4a00-b6aa-47b00a37365b] Running
	I0414 15:19:57.159154 1885939 system_pods.go:61] "storage-provisioner" [f936894b-82a7-4e6c-a08d-f3fb602cb2ce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0414 15:19:57.159160 1885939 system_pods.go:74] duration metric: took 13.180157ms to wait for pod list to return data ...
	I0414 15:19:57.159171 1885939 node_conditions.go:102] verifying NodePressure condition ...
	I0414 15:19:57.163799 1885939 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 15:19:57.163834 1885939 node_conditions.go:123] node cpu capacity is 2
	I0414 15:19:57.163852 1885939 node_conditions.go:105] duration metric: took 4.676131ms to run NodePressure ...
	I0414 15:19:57.163878 1885939 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:19:57.403066 1885939 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0414 15:19:57.406585 1885939 kubeadm.go:739] kubelet initialised
	I0414 15:19:57.406610 1885939 kubeadm.go:740] duration metric: took 3.512178ms waiting for restarted kubelet to initialise ...
	I0414 15:19:57.406622 1885939 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:19:57.414495 1885939 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9fpb5" in "kube-system" namespace to be "Ready" ...
	I0414 15:19:57.421313 1885939 pod_ready.go:98] node "test-preload-191380" hosting pod "coredns-6d4b75cb6d-9fpb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.421349 1885939 pod_ready.go:82] duration metric: took 6.822419ms for pod "coredns-6d4b75cb6d-9fpb5" in "kube-system" namespace to be "Ready" ...
	E0414 15:19:57.421362 1885939 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-191380" hosting pod "coredns-6d4b75cb6d-9fpb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.421372 1885939 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:19:57.428566 1885939 pod_ready.go:98] node "test-preload-191380" hosting pod "etcd-test-preload-191380" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.428605 1885939 pod_ready.go:82] duration metric: took 7.204182ms for pod "etcd-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	E0414 15:19:57.428618 1885939 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-191380" hosting pod "etcd-test-preload-191380" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.428626 1885939 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:19:57.435017 1885939 pod_ready.go:98] node "test-preload-191380" hosting pod "kube-apiserver-test-preload-191380" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.435043 1885939 pod_ready.go:82] duration metric: took 6.406081ms for pod "kube-apiserver-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	E0414 15:19:57.435053 1885939 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-191380" hosting pod "kube-apiserver-test-preload-191380" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.435059 1885939 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:19:57.549847 1885939 pod_ready.go:98] node "test-preload-191380" hosting pod "kube-controller-manager-test-preload-191380" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.549877 1885939 pod_ready.go:82] duration metric: took 114.808483ms for pod "kube-controller-manager-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	E0414 15:19:57.549887 1885939 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-191380" hosting pod "kube-controller-manager-test-preload-191380" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.549894 1885939 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lnh97" in "kube-system" namespace to be "Ready" ...
	I0414 15:19:57.950270 1885939 pod_ready.go:98] node "test-preload-191380" hosting pod "kube-proxy-lnh97" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.950297 1885939 pod_ready.go:82] duration metric: took 400.395513ms for pod "kube-proxy-lnh97" in "kube-system" namespace to be "Ready" ...
	E0414 15:19:57.950308 1885939 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-191380" hosting pod "kube-proxy-lnh97" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:57.950314 1885939 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:19:58.349859 1885939 pod_ready.go:98] node "test-preload-191380" hosting pod "kube-scheduler-test-preload-191380" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:58.349901 1885939 pod_ready.go:82] duration metric: took 399.57838ms for pod "kube-scheduler-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	E0414 15:19:58.349915 1885939 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-191380" hosting pod "kube-scheduler-test-preload-191380" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-191380" has status "Ready":"False"
	I0414 15:19:58.349925 1885939 pod_ready.go:39] duration metric: took 943.284537ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
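
Each pod above is skipped rather than waited on because the node itself still reports Ready=False; the per-pod check is simply the PodReady condition. A client-go sketch of that condition check; the kubeconfig path and label selector are assumptions for the example:

// List kube-system pods by label and report their Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20512-1845971/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
	}
}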
	I0414 15:19:58.349963 1885939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 15:19:58.362861 1885939 ops.go:34] apiserver oom_adj: -16
	I0414 15:19:58.362885 1885939 kubeadm.go:597] duration metric: took 9.723231593s to restartPrimaryControlPlane
	I0414 15:19:58.362896 1885939 kubeadm.go:394] duration metric: took 9.775199868s to StartCluster
	I0414 15:19:58.362915 1885939 settings.go:142] acquiring lock: {Name:mkf8fdccd744793c9a876a07da6b33fabe880d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:19:58.362991 1885939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:19:58.363660 1885939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/kubeconfig: {Name:mk700cb2cf46a87df11c1873f52c26c76c14915e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:19:58.363936 1885939 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:19:58.364076 1885939 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 15:19:58.364186 1885939 addons.go:69] Setting storage-provisioner=true in profile "test-preload-191380"
	I0414 15:19:58.364196 1885939 addons.go:69] Setting default-storageclass=true in profile "test-preload-191380"
	I0414 15:19:58.364226 1885939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-191380"
	I0414 15:19:58.364227 1885939 addons.go:238] Setting addon storage-provisioner=true in "test-preload-191380"
	W0414 15:19:58.364241 1885939 addons.go:247] addon storage-provisioner should already be in state true
	I0414 15:19:58.364277 1885939 host.go:66] Checking if "test-preload-191380" exists ...
	I0414 15:19:58.364203 1885939 config.go:182] Loaded profile config "test-preload-191380": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0414 15:19:58.364615 1885939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:19:58.364643 1885939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:19:58.364653 1885939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:19:58.364678 1885939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:19:58.366508 1885939 out.go:177] * Verifying Kubernetes components...
	I0414 15:19:58.367908 1885939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:19:58.381033 1885939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0414 15:19:58.381611 1885939 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:19:58.382094 1885939 main.go:141] libmachine: Using API Version  1
	I0414 15:19:58.382118 1885939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:19:58.382461 1885939 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:19:58.382660 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetState
	I0414 15:19:58.385224 1885939 kapi.go:59] client config for test-preload-191380: &rest.Config{Host:"https://192.168.39.135:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/client.crt", KeyFile:"/home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/test-preload-191380/client.key", CAFile:"/home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0414 15:19:58.385546 1885939 addons.go:238] Setting addon default-storageclass=true in "test-preload-191380"
	W0414 15:19:58.385563 1885939 addons.go:247] addon default-storageclass should already be in state true
	I0414 15:19:58.385588 1885939 host.go:66] Checking if "test-preload-191380" exists ...
	I0414 15:19:58.385807 1885939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45573
	I0414 15:19:58.385867 1885939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:19:58.385921 1885939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:19:58.386249 1885939 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:19:58.386753 1885939 main.go:141] libmachine: Using API Version  1
	I0414 15:19:58.386779 1885939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:19:58.387106 1885939 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:19:58.387713 1885939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:19:58.387770 1885939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:19:58.402909 1885939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44269
	I0414 15:19:58.403507 1885939 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:19:58.404008 1885939 main.go:141] libmachine: Using API Version  1
	I0414 15:19:58.404032 1885939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:19:58.404361 1885939 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:19:58.405031 1885939 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:19:58.405078 1885939 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:19:58.407624 1885939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35247
	I0414 15:19:58.435206 1885939 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:19:58.435802 1885939 main.go:141] libmachine: Using API Version  1
	I0414 15:19:58.435827 1885939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:19:58.436218 1885939 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:19:58.436460 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetState
	I0414 15:19:58.438399 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:58.440622 1885939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:19:58.442098 1885939 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:19:58.442124 1885939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 15:19:58.442149 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:58.445323 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:58.445789 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:58.445831 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:58.445984 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:58.446192 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:58.446378 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:58.446532 1885939 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/test-preload-191380/id_rsa Username:docker}
	I0414 15:19:58.451517 1885939 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0414 15:19:58.451978 1885939 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:19:58.452426 1885939 main.go:141] libmachine: Using API Version  1
	I0414 15:19:58.452447 1885939 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:19:58.452886 1885939 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:19:58.453097 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetState
	I0414 15:19:58.454899 1885939 main.go:141] libmachine: (test-preload-191380) Calling .DriverName
	I0414 15:19:58.455155 1885939 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 15:19:58.455178 1885939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 15:19:58.455201 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHHostname
	I0414 15:19:58.457980 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:58.458447 1885939 main.go:141] libmachine: (test-preload-191380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:bd:bc", ip: ""} in network mk-test-preload-191380: {Iface:virbr1 ExpiryTime:2025-04-14 16:19:24 +0000 UTC Type:0 Mac:52:54:00:04:bd:bc Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:test-preload-191380 Clientid:01:52:54:00:04:bd:bc}
	I0414 15:19:58.458473 1885939 main.go:141] libmachine: (test-preload-191380) DBG | domain test-preload-191380 has defined IP address 192.168.39.135 and MAC address 52:54:00:04:bd:bc in network mk-test-preload-191380
	I0414 15:19:58.458635 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHPort
	I0414 15:19:58.458818 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHKeyPath
	I0414 15:19:58.458977 1885939 main.go:141] libmachine: (test-preload-191380) Calling .GetSSHUsername
	I0414 15:19:58.459102 1885939 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/test-preload-191380/id_rsa Username:docker}
	I0414 15:19:58.544276 1885939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:19:58.561340 1885939 node_ready.go:35] waiting up to 6m0s for node "test-preload-191380" to be "Ready" ...
	I0414 15:19:58.643907 1885939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:19:58.665100 1885939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 15:19:59.692961 1885939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.027810805s)
	I0414 15:19:59.693037 1885939 main.go:141] libmachine: Making call to close driver server
	I0414 15:19:59.693052 1885939 main.go:141] libmachine: (test-preload-191380) Calling .Close
	I0414 15:19:59.693107 1885939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049159631s)
	I0414 15:19:59.693161 1885939 main.go:141] libmachine: Making call to close driver server
	I0414 15:19:59.693179 1885939 main.go:141] libmachine: (test-preload-191380) Calling .Close
	I0414 15:19:59.693392 1885939 main.go:141] libmachine: (test-preload-191380) DBG | Closing plugin on server side
	I0414 15:19:59.693426 1885939 main.go:141] libmachine: (test-preload-191380) DBG | Closing plugin on server side
	I0414 15:19:59.693448 1885939 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:19:59.693455 1885939 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:19:59.693462 1885939 main.go:141] libmachine: Making call to close driver server
	I0414 15:19:59.693469 1885939 main.go:141] libmachine: (test-preload-191380) Calling .Close
	I0414 15:19:59.693487 1885939 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:19:59.693508 1885939 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:19:59.693522 1885939 main.go:141] libmachine: Making call to close driver server
	I0414 15:19:59.693529 1885939 main.go:141] libmachine: (test-preload-191380) Calling .Close
	I0414 15:19:59.693748 1885939 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:19:59.693761 1885939 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:19:59.693800 1885939 main.go:141] libmachine: (test-preload-191380) DBG | Closing plugin on server side
	I0414 15:19:59.693827 1885939 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:19:59.693838 1885939 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:19:59.703461 1885939 main.go:141] libmachine: Making call to close driver server
	I0414 15:19:59.703490 1885939 main.go:141] libmachine: (test-preload-191380) Calling .Close
	I0414 15:19:59.703816 1885939 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:19:59.703837 1885939 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:19:59.703862 1885939 main.go:141] libmachine: (test-preload-191380) DBG | Closing plugin on server side
	I0414 15:19:59.705788 1885939 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0414 15:19:59.707036 1885939 addons.go:514] duration metric: took 1.342972577s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0414 15:20:00.566180 1885939 node_ready.go:53] node "test-preload-191380" has status "Ready":"False"
	I0414 15:20:03.065615 1885939 node_ready.go:53] node "test-preload-191380" has status "Ready":"False"
	I0414 15:20:05.066519 1885939 node_ready.go:53] node "test-preload-191380" has status "Ready":"False"
	I0414 15:20:06.065535 1885939 node_ready.go:49] node "test-preload-191380" has status "Ready":"True"
	I0414 15:20:06.065565 1885939 node_ready.go:38] duration metric: took 7.504183459s for node "test-preload-191380" to be "Ready" ...
	I0414 15:20:06.065577 1885939 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:20:06.068864 1885939 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9fpb5" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:06.074191 1885939 pod_ready.go:93] pod "coredns-6d4b75cb6d-9fpb5" in "kube-system" namespace has status "Ready":"True"
	I0414 15:20:06.074218 1885939 pod_ready.go:82] duration metric: took 5.319544ms for pod "coredns-6d4b75cb6d-9fpb5" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:06.074230 1885939 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:06.078984 1885939 pod_ready.go:93] pod "etcd-test-preload-191380" in "kube-system" namespace has status "Ready":"True"
	I0414 15:20:06.079008 1885939 pod_ready.go:82] duration metric: took 4.77069ms for pod "etcd-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:06.079019 1885939 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:08.087545 1885939 pod_ready.go:103] pod "kube-apiserver-test-preload-191380" in "kube-system" namespace has status "Ready":"False"
	I0414 15:20:08.588599 1885939 pod_ready.go:93] pod "kube-apiserver-test-preload-191380" in "kube-system" namespace has status "Ready":"True"
	I0414 15:20:08.588628 1885939 pod_ready.go:82] duration metric: took 2.509601999s for pod "kube-apiserver-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:08.588640 1885939 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:08.597244 1885939 pod_ready.go:93] pod "kube-controller-manager-test-preload-191380" in "kube-system" namespace has status "Ready":"True"
	I0414 15:20:08.597272 1885939 pod_ready.go:82] duration metric: took 8.624939ms for pod "kube-controller-manager-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:08.597286 1885939 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lnh97" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:08.608764 1885939 pod_ready.go:93] pod "kube-proxy-lnh97" in "kube-system" namespace has status "Ready":"True"
	I0414 15:20:08.608788 1885939 pod_ready.go:82] duration metric: took 11.496278ms for pod "kube-proxy-lnh97" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:08.608798 1885939 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:08.866998 1885939 pod_ready.go:93] pod "kube-scheduler-test-preload-191380" in "kube-system" namespace has status "Ready":"True"
	I0414 15:20:08.867028 1885939 pod_ready.go:82] duration metric: took 258.223261ms for pod "kube-scheduler-test-preload-191380" in "kube-system" namespace to be "Ready" ...
	I0414 15:20:08.867042 1885939 pod_ready.go:39] duration metric: took 2.801449399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:20:08.867061 1885939 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:20:08.867123 1885939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:20:08.884570 1885939 api_server.go:72] duration metric: took 10.520576273s to wait for apiserver process to appear ...
	I0414 15:20:08.884602 1885939 api_server.go:88] waiting for apiserver healthz status ...
	I0414 15:20:08.884620 1885939 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0414 15:20:08.892237 1885939 api_server.go:279] https://192.168.39.135:8443/healthz returned 200:
	ok
	I0414 15:20:08.893303 1885939 api_server.go:141] control plane version: v1.24.4
	I0414 15:20:08.893325 1885939 api_server.go:131] duration metric: took 8.717996ms to wait for apiserver health ...
	I0414 15:20:08.893333 1885939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 15:20:09.065817 1885939 system_pods.go:59] 7 kube-system pods found
	I0414 15:20:09.065861 1885939 system_pods.go:61] "coredns-6d4b75cb6d-9fpb5" [7e330922-0354-42a3-9d47-12a8aeeff522] Running
	I0414 15:20:09.065869 1885939 system_pods.go:61] "etcd-test-preload-191380" [bef0e243-d4a8-47ac-a05d-cf117030fb85] Running
	I0414 15:20:09.065874 1885939 system_pods.go:61] "kube-apiserver-test-preload-191380" [e1edd3eb-2ad5-4fd0-8c82-93db014f1c0d] Running
	I0414 15:20:09.065880 1885939 system_pods.go:61] "kube-controller-manager-test-preload-191380" [d8de101a-96d1-4f8e-b456-70178fb7bfb0] Running
	I0414 15:20:09.065886 1885939 system_pods.go:61] "kube-proxy-lnh97" [5630392d-1c1a-4408-8b6b-a9dc9def928d] Running
	I0414 15:20:09.065891 1885939 system_pods.go:61] "kube-scheduler-test-preload-191380" [2af63d37-9105-4a00-b6aa-47b00a37365b] Running
	I0414 15:20:09.065895 1885939 system_pods.go:61] "storage-provisioner" [f936894b-82a7-4e6c-a08d-f3fb602cb2ce] Running
	I0414 15:20:09.065903 1885939 system_pods.go:74] duration metric: took 172.563105ms to wait for pod list to return data ...
	I0414 15:20:09.065917 1885939 default_sa.go:34] waiting for default service account to be created ...
	I0414 15:20:09.265607 1885939 default_sa.go:45] found service account: "default"
	I0414 15:20:09.265639 1885939 default_sa.go:55] duration metric: took 199.7105ms for default service account to be created ...
	I0414 15:20:09.265654 1885939 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 15:20:09.469013 1885939 system_pods.go:86] 7 kube-system pods found
	I0414 15:20:09.469047 1885939 system_pods.go:89] "coredns-6d4b75cb6d-9fpb5" [7e330922-0354-42a3-9d47-12a8aeeff522] Running
	I0414 15:20:09.469053 1885939 system_pods.go:89] "etcd-test-preload-191380" [bef0e243-d4a8-47ac-a05d-cf117030fb85] Running
	I0414 15:20:09.469064 1885939 system_pods.go:89] "kube-apiserver-test-preload-191380" [e1edd3eb-2ad5-4fd0-8c82-93db014f1c0d] Running
	I0414 15:20:09.469067 1885939 system_pods.go:89] "kube-controller-manager-test-preload-191380" [d8de101a-96d1-4f8e-b456-70178fb7bfb0] Running
	I0414 15:20:09.469070 1885939 system_pods.go:89] "kube-proxy-lnh97" [5630392d-1c1a-4408-8b6b-a9dc9def928d] Running
	I0414 15:20:09.469073 1885939 system_pods.go:89] "kube-scheduler-test-preload-191380" [2af63d37-9105-4a00-b6aa-47b00a37365b] Running
	I0414 15:20:09.469076 1885939 system_pods.go:89] "storage-provisioner" [f936894b-82a7-4e6c-a08d-f3fb602cb2ce] Running
	I0414 15:20:09.469084 1885939 system_pods.go:126] duration metric: took 203.422738ms to wait for k8s-apps to be running ...
	I0414 15:20:09.469092 1885939 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 15:20:09.469152 1885939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:20:09.484997 1885939 system_svc.go:56] duration metric: took 15.893431ms WaitForService to wait for kubelet
	I0414 15:20:09.485033 1885939 kubeadm.go:582] duration metric: took 11.121061416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:20:09.485051 1885939 node_conditions.go:102] verifying NodePressure condition ...
	I0414 15:20:09.665699 1885939 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 15:20:09.665726 1885939 node_conditions.go:123] node cpu capacity is 2
	I0414 15:20:09.665739 1885939 node_conditions.go:105] duration metric: took 180.684135ms to run NodePressure ...
	I0414 15:20:09.665750 1885939 start.go:241] waiting for startup goroutines ...
	I0414 15:20:09.665756 1885939 start.go:246] waiting for cluster config update ...
	I0414 15:20:09.665768 1885939 start.go:255] writing updated cluster config ...
	I0414 15:20:09.666054 1885939 ssh_runner.go:195] Run: rm -f paused
	I0414 15:20:09.718528 1885939 start.go:600] kubectl: 1.32.3, cluster: 1.24.4 (minor skew: 8)
	I0414 15:20:09.720440 1885939 out.go:201] 
	W0414 15:20:09.721795 1885939 out.go:270] ! /usr/local/bin/kubectl is version 1.32.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0414 15:20:09.722926 1885939 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0414 15:20:09.724016 1885939 out.go:177] * Done! kubectl is now configured to use "test-preload-191380" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.666264512Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744644010666238387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99b4a695-4f97-4c4c-b4cb-fb5b91bc046e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.666753238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b64ce834-09b4-4582-8867-69b52b1bea4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.666802119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b64ce834-09b4-4582-8867-69b52b1bea4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.666952828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9749661945e688c533f261cbe2225ba0f2c64c1c214383f6f96f6a3f746ea64,PodSandboxId:bb674b4856938be85e4030f6d80a5982a46e69c4d12a9a226ca2e4b6784feb82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744644004097373475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9fpb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e330922-0354-42a3-9d47-12a8aeeff522,},Annotations:map[string]string{io.kubernetes.container.hash: 42741cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b9cba5694f819bb8e6669b46372856d1f73c77f11b0cfea211c010f5ed32ba,PodSandboxId:ebfdb3ec9e694b8dd3d924c021badde3f4f6a8f49ed48af135e94da2db5677ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744643997020246007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: f936894b-82a7-4e6c-a08d-f3fb602cb2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6e0b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9484401ea16f2dc25aa748b49f96364b9ce6e2ac46a5c554eb26d7a692330e60,PodSandboxId:06258bdb507ddcc87d5cca95ebe5d767c9bcab31460a4494f64c7344bc032852,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744643996704410333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnh97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
30392d-1c1a-4408-8b6b-a9dc9def928d,},Annotations:map[string]string{io.kubernetes.container.hash: 86a9c752,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48018eb7c01fa2658ecbeadbcad03df944bf3f08b78f73935542b44662eb0d,PodSandboxId:a76dd501dbb9e32a98dfeb528de3a701eacff0eb4f1b7602370c4c9f1821ad33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744643990791760715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: c5d727e6977e4cabf199f9f12fc0c25f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c743890d1e8efbe9380e527c4dc9ccce7f42ee716aec90b65688289baba4b72,PodSandboxId:9ee22f18ec743dab8a04ded1d664d505f9147392eab6545581ee2fad6a17bfe4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744643990797379435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 97d6ed6b7952d1a58bc5dd789f31c7d8,},Annotations:map[string]string{io.kubernetes.container.hash: ee7dc9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ab39cbb2010203807a739f37a93cb33e763e8e0919402bfada79673dcf66b0,PodSandboxId:e8989b170b37504e89c71f7c5fa5383ac34a8104e0f48b234cd9e387e61db589,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744643990779089520,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe243fdca73029efd60451714801563d,},
Annotations:map[string]string{io.kubernetes.container.hash: 2e9e6711,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d172e8d742a2f358fe7d58cc3cb82a10256ca797a76d84e08bdb7632cecc501f,PodSandboxId:242fb575ce1dc7654d9ba81dd938bd5137da01d1f8fbefd3bfe62280a708c6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744643990737065128,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15034ada2d1d7082f8e163504efb94d3,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b64ce834-09b4-4582-8867-69b52b1bea4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.706698947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82c8e786-af7b-4956-9ce4-2fd6a9702965 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.706771029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82c8e786-af7b-4956-9ce4-2fd6a9702965 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.707902914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91bd86de-f036-4438-96db-50705935758e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.708426482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744644010708404598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91bd86de-f036-4438-96db-50705935758e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.709067449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92f6c697-ca64-42c5-af5f-5f438087a2e1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.709177972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92f6c697-ca64-42c5-af5f-5f438087a2e1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.709376521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9749661945e688c533f261cbe2225ba0f2c64c1c214383f6f96f6a3f746ea64,PodSandboxId:bb674b4856938be85e4030f6d80a5982a46e69c4d12a9a226ca2e4b6784feb82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744644004097373475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9fpb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e330922-0354-42a3-9d47-12a8aeeff522,},Annotations:map[string]string{io.kubernetes.container.hash: 42741cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b9cba5694f819bb8e6669b46372856d1f73c77f11b0cfea211c010f5ed32ba,PodSandboxId:ebfdb3ec9e694b8dd3d924c021badde3f4f6a8f49ed48af135e94da2db5677ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744643997020246007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: f936894b-82a7-4e6c-a08d-f3fb602cb2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6e0b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9484401ea16f2dc25aa748b49f96364b9ce6e2ac46a5c554eb26d7a692330e60,PodSandboxId:06258bdb507ddcc87d5cca95ebe5d767c9bcab31460a4494f64c7344bc032852,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744643996704410333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnh97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
30392d-1c1a-4408-8b6b-a9dc9def928d,},Annotations:map[string]string{io.kubernetes.container.hash: 86a9c752,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48018eb7c01fa2658ecbeadbcad03df944bf3f08b78f73935542b44662eb0d,PodSandboxId:a76dd501dbb9e32a98dfeb528de3a701eacff0eb4f1b7602370c4c9f1821ad33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744643990791760715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: c5d727e6977e4cabf199f9f12fc0c25f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c743890d1e8efbe9380e527c4dc9ccce7f42ee716aec90b65688289baba4b72,PodSandboxId:9ee22f18ec743dab8a04ded1d664d505f9147392eab6545581ee2fad6a17bfe4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744643990797379435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 97d6ed6b7952d1a58bc5dd789f31c7d8,},Annotations:map[string]string{io.kubernetes.container.hash: ee7dc9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ab39cbb2010203807a739f37a93cb33e763e8e0919402bfada79673dcf66b0,PodSandboxId:e8989b170b37504e89c71f7c5fa5383ac34a8104e0f48b234cd9e387e61db589,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744643990779089520,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe243fdca73029efd60451714801563d,},
Annotations:map[string]string{io.kubernetes.container.hash: 2e9e6711,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d172e8d742a2f358fe7d58cc3cb82a10256ca797a76d84e08bdb7632cecc501f,PodSandboxId:242fb575ce1dc7654d9ba81dd938bd5137da01d1f8fbefd3bfe62280a708c6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744643990737065128,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15034ada2d1d7082f8e163504efb94d3,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92f6c697-ca64-42c5-af5f-5f438087a2e1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.750507756Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60d288ee-476b-4360-9e08-6ead4acba796 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.750602685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60d288ee-476b-4360-9e08-6ead4acba796 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.752302683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32362dd3-84c4-42a2-8ccb-70929a1ee41e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.752747996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744644010752727463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32362dd3-84c4-42a2-8ccb-70929a1ee41e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.753439455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c98be991-5929-4788-a549-b0170986b1b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.753506442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c98be991-5929-4788-a549-b0170986b1b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.754185211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9749661945e688c533f261cbe2225ba0f2c64c1c214383f6f96f6a3f746ea64,PodSandboxId:bb674b4856938be85e4030f6d80a5982a46e69c4d12a9a226ca2e4b6784feb82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744644004097373475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9fpb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e330922-0354-42a3-9d47-12a8aeeff522,},Annotations:map[string]string{io.kubernetes.container.hash: 42741cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b9cba5694f819bb8e6669b46372856d1f73c77f11b0cfea211c010f5ed32ba,PodSandboxId:ebfdb3ec9e694b8dd3d924c021badde3f4f6a8f49ed48af135e94da2db5677ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744643997020246007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: f936894b-82a7-4e6c-a08d-f3fb602cb2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6e0b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9484401ea16f2dc25aa748b49f96364b9ce6e2ac46a5c554eb26d7a692330e60,PodSandboxId:06258bdb507ddcc87d5cca95ebe5d767c9bcab31460a4494f64c7344bc032852,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744643996704410333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnh97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
30392d-1c1a-4408-8b6b-a9dc9def928d,},Annotations:map[string]string{io.kubernetes.container.hash: 86a9c752,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48018eb7c01fa2658ecbeadbcad03df944bf3f08b78f73935542b44662eb0d,PodSandboxId:a76dd501dbb9e32a98dfeb528de3a701eacff0eb4f1b7602370c4c9f1821ad33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744643990791760715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: c5d727e6977e4cabf199f9f12fc0c25f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c743890d1e8efbe9380e527c4dc9ccce7f42ee716aec90b65688289baba4b72,PodSandboxId:9ee22f18ec743dab8a04ded1d664d505f9147392eab6545581ee2fad6a17bfe4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744643990797379435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 97d6ed6b7952d1a58bc5dd789f31c7d8,},Annotations:map[string]string{io.kubernetes.container.hash: ee7dc9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ab39cbb2010203807a739f37a93cb33e763e8e0919402bfada79673dcf66b0,PodSandboxId:e8989b170b37504e89c71f7c5fa5383ac34a8104e0f48b234cd9e387e61db589,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744643990779089520,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe243fdca73029efd60451714801563d,},
Annotations:map[string]string{io.kubernetes.container.hash: 2e9e6711,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d172e8d742a2f358fe7d58cc3cb82a10256ca797a76d84e08bdb7632cecc501f,PodSandboxId:242fb575ce1dc7654d9ba81dd938bd5137da01d1f8fbefd3bfe62280a708c6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744643990737065128,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15034ada2d1d7082f8e163504efb94d3,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c98be991-5929-4788-a549-b0170986b1b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.791148485Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eaba0b34-641d-467e-a404-4b15547e07ac name=/runtime.v1.RuntimeService/Version
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.791243502Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eaba0b34-641d-467e-a404-4b15547e07ac name=/runtime.v1.RuntimeService/Version
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.792805244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1ea6106-0cef-4a0d-aa41-14f1477871c6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.793329525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744644010793301582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1ea6106-0cef-4a0d-aa41-14f1477871c6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.793867036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fef4f1f5-cab8-4ed2-8b29-b602c39927ba name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.794035100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fef4f1f5-cab8-4ed2-8b29-b602c39927ba name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:20:10 test-preload-191380 crio[671]: time="2025-04-14 15:20:10.794289822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9749661945e688c533f261cbe2225ba0f2c64c1c214383f6f96f6a3f746ea64,PodSandboxId:bb674b4856938be85e4030f6d80a5982a46e69c4d12a9a226ca2e4b6784feb82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1744644004097373475,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9fpb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e330922-0354-42a3-9d47-12a8aeeff522,},Annotations:map[string]string{io.kubernetes.container.hash: 42741cc6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09b9cba5694f819bb8e6669b46372856d1f73c77f11b0cfea211c010f5ed32ba,PodSandboxId:ebfdb3ec9e694b8dd3d924c021badde3f4f6a8f49ed48af135e94da2db5677ad,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744643997020246007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: f936894b-82a7-4e6c-a08d-f3fb602cb2ce,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6e0b62,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9484401ea16f2dc25aa748b49f96364b9ce6e2ac46a5c554eb26d7a692330e60,PodSandboxId:06258bdb507ddcc87d5cca95ebe5d767c9bcab31460a4494f64c7344bc032852,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1744643996704410333,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lnh97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56
30392d-1c1a-4408-8b6b-a9dc9def928d,},Annotations:map[string]string{io.kubernetes.container.hash: 86a9c752,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48018eb7c01fa2658ecbeadbcad03df944bf3f08b78f73935542b44662eb0d,PodSandboxId:a76dd501dbb9e32a98dfeb528de3a701eacff0eb4f1b7602370c4c9f1821ad33,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1744643990791760715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: c5d727e6977e4cabf199f9f12fc0c25f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c743890d1e8efbe9380e527c4dc9ccce7f42ee716aec90b65688289baba4b72,PodSandboxId:9ee22f18ec743dab8a04ded1d664d505f9147392eab6545581ee2fad6a17bfe4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1744643990797379435,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 97d6ed6b7952d1a58bc5dd789f31c7d8,},Annotations:map[string]string{io.kubernetes.container.hash: ee7dc9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3ab39cbb2010203807a739f37a93cb33e763e8e0919402bfada79673dcf66b0,PodSandboxId:e8989b170b37504e89c71f7c5fa5383ac34a8104e0f48b234cd9e387e61db589,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1744643990779089520,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe243fdca73029efd60451714801563d,},
Annotations:map[string]string{io.kubernetes.container.hash: 2e9e6711,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d172e8d742a2f358fe7d58cc3cb82a10256ca797a76d84e08bdb7632cecc501f,PodSandboxId:242fb575ce1dc7654d9ba81dd938bd5137da01d1f8fbefd3bfe62280a708c6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1744643990737065128,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-191380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15034ada2d1d7082f8e163504efb94d3,},Annotations
:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fef4f1f5-cab8-4ed2-8b29-b602c39927ba name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b9749661945e6       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   bb674b4856938       coredns-6d4b75cb6d-9fpb5
	09b9cba5694f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       2                   ebfdb3ec9e694       storage-provisioner
	9484401ea16f2       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   06258bdb507dd       kube-proxy-lnh97
	5c743890d1e8e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   9ee22f18ec743       kube-apiserver-test-preload-191380
	4c48018eb7c01       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   a76dd501dbb9e       kube-controller-manager-test-preload-191380
	e3ab39cbb2010       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   e8989b170b375       etcd-test-preload-191380
	d172e8d742a2f       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   242fb575ce1dc       kube-scheduler-test-preload-191380
	
	
	==> coredns [b9749661945e688c533f261cbe2225ba0f2c64c1c214383f6f96f6a3f746ea64] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:46256 - 50776 "HINFO IN 8774940228889015293.5142496555324921791. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015249168s
	
	
	==> describe nodes <==
	Name:               test-preload-191380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-191380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2
	                    minikube.k8s.io/name=test-preload-191380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T15_18_04_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 15:18:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-191380
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 15:20:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 15:20:05 +0000   Mon, 14 Apr 2025 15:17:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 15:20:05 +0000   Mon, 14 Apr 2025 15:17:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 15:20:05 +0000   Mon, 14 Apr 2025 15:17:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 15:20:05 +0000   Mon, 14 Apr 2025 15:20:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    test-preload-191380
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 de62876da2854180b5402d3e4e2fdc0b
	  System UUID:                de62876d-a285-4180-b540-2d3e4e2fdc0b
	  Boot ID:                    270658b0-bf19-4fed-8cf0-eecf6867d26e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9fpb5                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     115s
	  kube-system                 etcd-test-preload-191380                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m7s
	  kube-system                 kube-apiserver-test-preload-191380             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-test-preload-191380    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-lnh97                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-test-preload-191380             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13s                    kube-proxy       
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m16s (x5 over 2m16s)  kubelet          Node test-preload-191380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x5 over 2m16s)  kubelet          Node test-preload-191380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x5 over 2m16s)  kubelet          Node test-preload-191380 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s                   kubelet          Node test-preload-191380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s                   kubelet          Node test-preload-191380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s                   kubelet          Node test-preload-191380 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                117s                   kubelet          Node test-preload-191380 status is now: NodeReady
	  Normal  RegisteredNode           116s                   node-controller  Node test-preload-191380 event: Registered Node test-preload-191380 in Controller
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)      kubelet          Node test-preload-191380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)      kubelet          Node test-preload-191380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)      kubelet          Node test-preload-191380 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                     node-controller  Node test-preload-191380 event: Registered Node test-preload-191380 in Controller
	
	
	==> dmesg <==
	[Apr14 15:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051961] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040412] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.019359] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.789102] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.644012] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.082076] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.057682] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054181] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.186225] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.136685] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.295491] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[ +14.893109] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.061718] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.755364] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +4.763920] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.880105] systemd-fstab-generator[1779]: Ignoring "noauto" option for root device
	[Apr14 15:20] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [e3ab39cbb2010203807a739f37a93cb33e763e8e0919402bfada79673dcf66b0] <==
	{"level":"info","ts":"2025-04-14T15:19:51.294Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"a24066339cb4fbfd","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-14T15:19:51.302Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T15:19:51.303Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a24066339cb4fbfd","initial-advertise-peer-urls":["https://192.168.39.135:2380"],"listen-peer-urls":["https://192.168.39.135:2380"],"advertise-client-urls":["https://192.168.39.135:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.135:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T15:19:51.303Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T15:19:51.305Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-14T15:19:51.306Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.135:2380"}
	{"level":"info","ts":"2025-04-14T15:19:51.306Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.135:2380"}
	{"level":"info","ts":"2025-04-14T15:19:51.309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd switched to configuration voters=(11691457004512279549)"}
	{"level":"info","ts":"2025-04-14T15:19:51.309Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7263b87883d60113","local-member-id":"a24066339cb4fbfd","added-peer-id":"a24066339cb4fbfd","added-peer-peer-urls":["https://192.168.39.135:2380"]}
	{"level":"info","ts":"2025-04-14T15:19:51.309Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7263b87883d60113","local-member-id":"a24066339cb4fbfd","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T15:19:51.309Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T15:19:52.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-14T15:19:52.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-14T15:19:52.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd received MsgPreVoteResp from a24066339cb4fbfd at term 2"}
	{"level":"info","ts":"2025-04-14T15:19:52.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd became candidate at term 3"}
	{"level":"info","ts":"2025-04-14T15:19:52.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd received MsgVoteResp from a24066339cb4fbfd at term 3"}
	{"level":"info","ts":"2025-04-14T15:19:52.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a24066339cb4fbfd became leader at term 3"}
	{"level":"info","ts":"2025-04-14T15:19:52.946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a24066339cb4fbfd elected leader a24066339cb4fbfd at term 3"}
	{"level":"info","ts":"2025-04-14T15:19:52.952Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"a24066339cb4fbfd","local-member-attributes":"{Name:test-preload-191380 ClientURLs:[https://192.168.39.135:2379]}","request-path":"/0/members/a24066339cb4fbfd/attributes","cluster-id":"7263b87883d60113","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T15:19:52.952Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T15:19:52.953Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T15:19:52.954Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T15:19:52.957Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.135:2379"}
	{"level":"info","ts":"2025-04-14T15:19:52.957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T15:19:52.957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 15:20:11 up 0 min,  0 users,  load average: 0.71, 0.20, 0.07
	Linux test-preload-191380 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5c743890d1e8efbe9380e527c4dc9ccce7f42ee716aec90b65688289baba4b72] <==
	I0414 15:19:55.425980       1 establishing_controller.go:76] Starting EstablishingController
	I0414 15:19:55.426164       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0414 15:19:55.426206       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0414 15:19:55.426252       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0414 15:19:55.482490       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0414 15:19:55.482523       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0414 15:19:55.482533       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0414 15:19:55.497645       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0414 15:19:55.514177       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0414 15:19:55.515453       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0414 15:19:55.518494       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0414 15:19:55.524793       1 cache.go:39] Caches are synced for autoregister controller
	I0414 15:19:55.533444       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0414 15:19:55.547577       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0414 15:19:55.597573       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0414 15:19:56.125326       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0414 15:19:56.425569       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 15:19:57.033306       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0414 15:19:57.299340       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0414 15:19:57.310325       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0414 15:19:57.354712       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0414 15:19:57.375498       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 15:19:57.385396       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 15:20:07.895929       1 controller.go:611] quota admission added evaluator for: endpoints
	I0414 15:20:08.108429       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4c48018eb7c01fa2658ecbeadbcad03df944bf3f08b78f73935542b44662eb0d] <==
	I0414 15:20:07.926527       1 shared_informer.go:262] Caches are synced for crt configmap
	I0414 15:20:07.933581       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0414 15:20:07.934887       1 shared_informer.go:262] Caches are synced for HPA
	I0414 15:20:07.936179       1 shared_informer.go:262] Caches are synced for PVC protection
	I0414 15:20:07.938557       1 shared_informer.go:262] Caches are synced for service account
	I0414 15:20:07.938649       1 shared_informer.go:262] Caches are synced for GC
	I0414 15:20:07.939865       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0414 15:20:07.948331       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0414 15:20:08.023871       1 shared_informer.go:262] Caches are synced for taint
	I0414 15:20:08.024180       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0414 15:20:08.024315       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-191380. Assuming now as a timestamp.
	I0414 15:20:08.024367       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0414 15:20:08.024651       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0414 15:20:08.025384       1 event.go:294] "Event occurred" object="test-preload-191380" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-191380 event: Registered Node test-preload-191380 in Controller"
	I0414 15:20:08.031313       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0414 15:20:08.066028       1 shared_informer.go:262] Caches are synced for disruption
	I0414 15:20:08.066166       1 disruption.go:371] Sending events to api server.
	I0414 15:20:08.070561       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 15:20:08.088538       1 shared_informer.go:262] Caches are synced for deployment
	I0414 15:20:08.095069       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0414 15:20:08.113036       1 shared_informer.go:262] Caches are synced for resource quota
	I0414 15:20:08.141863       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0414 15:20:08.555246       1 shared_informer.go:262] Caches are synced for garbage collector
	I0414 15:20:08.555359       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0414 15:20:08.562307       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [9484401ea16f2dc25aa748b49f96364b9ce6e2ac46a5c554eb26d7a692330e60] <==
	I0414 15:19:56.948412       1 node.go:163] Successfully retrieved node IP: 192.168.39.135
	I0414 15:19:56.948767       1 server_others.go:138] "Detected node IP" address="192.168.39.135"
	I0414 15:19:56.948952       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0414 15:19:57.024041       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0414 15:19:57.024076       1 server_others.go:206] "Using iptables Proxier"
	I0414 15:19:57.024789       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0414 15:19:57.025776       1 server.go:661] "Version info" version="v1.24.4"
	I0414 15:19:57.025894       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 15:19:57.028013       1 config.go:317] "Starting service config controller"
	I0414 15:19:57.028269       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0414 15:19:57.028359       1 config.go:226] "Starting endpoint slice config controller"
	I0414 15:19:57.028366       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0414 15:19:57.029322       1 config.go:444] "Starting node config controller"
	I0414 15:19:57.029334       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0414 15:19:57.129245       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0414 15:19:57.129299       1 shared_informer.go:262] Caches are synced for service config
	I0414 15:19:57.130354       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [d172e8d742a2f358fe7d58cc3cb82a10256ca797a76d84e08bdb7632cecc501f] <==
	I0414 15:19:51.847316       1 serving.go:348] Generated self-signed cert in-memory
	W0414 15:19:55.471191       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 15:19:55.473169       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 15:19:55.473253       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 15:19:55.473278       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 15:19:55.506775       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0414 15:19:55.506854       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 15:19:55.511879       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0414 15:19:55.512052       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 15:19:55.516956       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 15:19:55.512087       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0414 15:19:55.617236       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 15:19:55 test-preload-191380 kubelet[1130]: I0414 15:19:55.611409    1130 setters.go:532] "Node became not ready" node="test-preload-191380" condition={Type:Ready Status:False LastHeartbeatTime:2025-04-14 15:19:55.611342528 +0000 UTC m=+5.761383085 LastTransitionTime:2025-04-14 15:19:55.611342528 +0000 UTC m=+5.761383085 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Apr 14 15:19:55 test-preload-191380 kubelet[1130]: I0414 15:19:55.980431    1130 apiserver.go:52] "Watching apiserver"
	Apr 14 15:19:55 test-preload-191380 kubelet[1130]: I0414 15:19:55.986771    1130 topology_manager.go:200] "Topology Admit Handler"
	Apr 14 15:19:55 test-preload-191380 kubelet[1130]: I0414 15:19:55.986985    1130 topology_manager.go:200] "Topology Admit Handler"
	Apr 14 15:19:55 test-preload-191380 kubelet[1130]: I0414 15:19:55.987074    1130 topology_manager.go:200] "Topology Admit Handler"
	Apr 14 15:19:55 test-preload-191380 kubelet[1130]: E0414 15:19:55.990008    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9fpb5" podUID=7e330922-0354-42a3-9d47-12a8aeeff522
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: I0414 15:19:56.053604    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5630392d-1c1a-4408-8b6b-a9dc9def928d-kube-proxy\") pod \"kube-proxy-lnh97\" (UID: \"5630392d-1c1a-4408-8b6b-a9dc9def928d\") " pod="kube-system/kube-proxy-lnh97"
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: I0414 15:19:56.053669    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5630392d-1c1a-4408-8b6b-a9dc9def928d-xtables-lock\") pod \"kube-proxy-lnh97\" (UID: \"5630392d-1c1a-4408-8b6b-a9dc9def928d\") " pod="kube-system/kube-proxy-lnh97"
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: I0414 15:19:56.053691    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5630392d-1c1a-4408-8b6b-a9dc9def928d-lib-modules\") pod \"kube-proxy-lnh97\" (UID: \"5630392d-1c1a-4408-8b6b-a9dc9def928d\") " pod="kube-system/kube-proxy-lnh97"
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: I0414 15:19:56.053717    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e330922-0354-42a3-9d47-12a8aeeff522-config-volume\") pod \"coredns-6d4b75cb6d-9fpb5\" (UID: \"7e330922-0354-42a3-9d47-12a8aeeff522\") " pod="kube-system/coredns-6d4b75cb6d-9fpb5"
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: I0414 15:19:56.053776    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfkp5\" (UniqueName: \"kubernetes.io/projected/7e330922-0354-42a3-9d47-12a8aeeff522-kube-api-access-cfkp5\") pod \"coredns-6d4b75cb6d-9fpb5\" (UID: \"7e330922-0354-42a3-9d47-12a8aeeff522\") " pod="kube-system/coredns-6d4b75cb6d-9fpb5"
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: I0414 15:19:56.053801    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f936894b-82a7-4e6c-a08d-f3fb602cb2ce-tmp\") pod \"storage-provisioner\" (UID: \"f936894b-82a7-4e6c-a08d-f3fb602cb2ce\") " pod="kube-system/storage-provisioner"
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: I0414 15:19:56.053821    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhv56\" (UniqueName: \"kubernetes.io/projected/5630392d-1c1a-4408-8b6b-a9dc9def928d-kube-api-access-fhv56\") pod \"kube-proxy-lnh97\" (UID: \"5630392d-1c1a-4408-8b6b-a9dc9def928d\") " pod="kube-system/kube-proxy-lnh97"
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: I0414 15:19:56.053843    1130 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4kc\" (UniqueName: \"kubernetes.io/projected/f936894b-82a7-4e6c-a08d-f3fb602cb2ce-kube-api-access-7s4kc\") pod \"storage-provisioner\" (UID: \"f936894b-82a7-4e6c-a08d-f3fb602cb2ce\") " pod="kube-system/storage-provisioner"
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: I0414 15:19:56.053857    1130 reconciler.go:159] "Reconciler: start to sync state"
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: E0414 15:19:56.157462    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: E0414 15:19:56.157615    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7e330922-0354-42a3-9d47-12a8aeeff522-config-volume podName:7e330922-0354-42a3-9d47-12a8aeeff522 nodeName:}" failed. No retries permitted until 2025-04-14 15:19:56.657570179 +0000 UTC m=+6.807610739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e330922-0354-42a3-9d47-12a8aeeff522-config-volume") pod "coredns-6d4b75cb6d-9fpb5" (UID: "7e330922-0354-42a3-9d47-12a8aeeff522") : object "kube-system"/"coredns" not registered
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: E0414 15:19:56.661035    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 15:19:56 test-preload-191380 kubelet[1130]: E0414 15:19:56.661156    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7e330922-0354-42a3-9d47-12a8aeeff522-config-volume podName:7e330922-0354-42a3-9d47-12a8aeeff522 nodeName:}" failed. No retries permitted until 2025-04-14 15:19:57.661089538 +0000 UTC m=+7.811130082 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e330922-0354-42a3-9d47-12a8aeeff522-config-volume") pod "coredns-6d4b75cb6d-9fpb5" (UID: "7e330922-0354-42a3-9d47-12a8aeeff522") : object "kube-system"/"coredns" not registered
	Apr 14 15:19:57 test-preload-191380 kubelet[1130]: E0414 15:19:57.120058    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9fpb5" podUID=7e330922-0354-42a3-9d47-12a8aeeff522
	Apr 14 15:19:57 test-preload-191380 kubelet[1130]: E0414 15:19:57.677866    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 15:19:57 test-preload-191380 kubelet[1130]: E0414 15:19:57.677949    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7e330922-0354-42a3-9d47-12a8aeeff522-config-volume podName:7e330922-0354-42a3-9d47-12a8aeeff522 nodeName:}" failed. No retries permitted until 2025-04-14 15:19:59.677933363 +0000 UTC m=+9.827973920 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e330922-0354-42a3-9d47-12a8aeeff522-config-volume") pod "coredns-6d4b75cb6d-9fpb5" (UID: "7e330922-0354-42a3-9d47-12a8aeeff522") : object "kube-system"/"coredns" not registered
	Apr 14 15:19:59 test-preload-191380 kubelet[1130]: E0414 15:19:59.120683    1130 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9fpb5" podUID=7e330922-0354-42a3-9d47-12a8aeeff522
	Apr 14 15:19:59 test-preload-191380 kubelet[1130]: E0414 15:19:59.701061    1130 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 14 15:19:59 test-preload-191380 kubelet[1130]: E0414 15:19:59.701261    1130 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/7e330922-0354-42a3-9d47-12a8aeeff522-config-volume podName:7e330922-0354-42a3-9d47-12a8aeeff522 nodeName:}" failed. No retries permitted until 2025-04-14 15:20:03.701234439 +0000 UTC m=+13.851274984 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7e330922-0354-42a3-9d47-12a8aeeff522-config-volume") pod "coredns-6d4b75cb6d-9fpb5" (UID: "7e330922-0354-42a3-9d47-12a8aeeff522") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [09b9cba5694f819bb8e6669b46372856d1f73c77f11b0cfea211c010f5ed32ba] <==
	I0414 15:19:57.132187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-191380 -n test-preload-191380
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-191380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-191380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-191380
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-191380: (1.07526715s)
--- FAIL: TestPreload (203.84s)

                                                
                                    
TestKubernetesUpgrade (336.38s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m41.428026963s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-608146] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-608146" primary control-plane node in "kubernetes-upgrade-608146" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 15:24:54.511441 1891460 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:24:54.511631 1891460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:24:54.511646 1891460 out.go:358] Setting ErrFile to fd 2...
	I0414 15:24:54.511653 1891460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:24:54.511969 1891460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:24:54.512757 1891460 out.go:352] Setting JSON to false
	I0414 15:24:54.514196 1891460 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":40039,"bootTime":1744604256,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:24:54.514281 1891460 start.go:139] virtualization: kvm guest
	I0414 15:24:54.516520 1891460 out.go:177] * [kubernetes-upgrade-608146] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:24:54.518003 1891460 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:24:54.518046 1891460 notify.go:220] Checking for updates...
	I0414 15:24:54.520496 1891460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:24:54.521751 1891460 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:24:54.522975 1891460 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:24:54.524141 1891460 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:24:54.525432 1891460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:24:54.526998 1891460 config.go:182] Loaded profile config "NoKubernetes-508923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0414 15:24:54.527160 1891460 config.go:182] Loaded profile config "pause-914049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:24:54.527257 1891460 config.go:182] Loaded profile config "running-upgrade-517744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0414 15:24:54.527370 1891460 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:24:54.563235 1891460 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 15:24:54.564401 1891460 start.go:297] selected driver: kvm2
	I0414 15:24:54.564432 1891460 start.go:901] validating driver "kvm2" against <nil>
	I0414 15:24:54.564449 1891460 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:24:54.565380 1891460 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:24:54.565471 1891460 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 15:24:54.582637 1891460 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 15:24:54.582707 1891460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 15:24:54.583040 1891460 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 15:24:54.583079 1891460 cni.go:84] Creating CNI manager for ""
	I0414 15:24:54.583134 1891460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:24:54.583173 1891460 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 15:24:54.583254 1891460 start.go:340] cluster config:
	{Name:kubernetes-upgrade-608146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-608146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:24:54.583397 1891460 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:24:54.585431 1891460 out.go:177] * Starting "kubernetes-upgrade-608146" primary control-plane node in "kubernetes-upgrade-608146" cluster
	I0414 15:24:54.586654 1891460 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 15:24:54.586737 1891460 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 15:24:54.586753 1891460 cache.go:56] Caching tarball of preloaded images
	I0414 15:24:54.586872 1891460 preload.go:172] Found /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 15:24:54.586888 1891460 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 15:24:54.587029 1891460 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/config.json ...
	I0414 15:24:54.587058 1891460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/config.json: {Name:mk3a13c8144adba5a37f8e6b1f3db010cc376d7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:24:54.587260 1891460 start.go:360] acquireMachinesLock for kubernetes-upgrade-608146: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:25:00.279887 1891460 start.go:364] duration metric: took 5.692571456s to acquireMachinesLock for "kubernetes-upgrade-608146"
	I0414 15:25:00.279954 1891460 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-608146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20
.0 ClusterName:kubernetes-upgrade-608146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:25:00.280153 1891460 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 15:25:00.282295 1891460 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 15:25:00.282648 1891460 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:25:00.282730 1891460 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:25:00.304391 1891460 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36569
	I0414 15:25:00.305006 1891460 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:25:00.305633 1891460 main.go:141] libmachine: Using API Version  1
	I0414 15:25:00.305660 1891460 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:25:00.306096 1891460 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:25:00.306328 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetMachineName
	I0414 15:25:00.306598 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:25:00.306797 1891460 start.go:159] libmachine.API.Create for "kubernetes-upgrade-608146" (driver="kvm2")
	I0414 15:25:00.306836 1891460 client.go:168] LocalClient.Create starting
	I0414 15:25:00.306877 1891460 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem
	I0414 15:25:00.306925 1891460 main.go:141] libmachine: Decoding PEM data...
	I0414 15:25:00.306949 1891460 main.go:141] libmachine: Parsing certificate...
	I0414 15:25:00.307039 1891460 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem
	I0414 15:25:00.307072 1891460 main.go:141] libmachine: Decoding PEM data...
	I0414 15:25:00.307094 1891460 main.go:141] libmachine: Parsing certificate...
	I0414 15:25:00.307123 1891460 main.go:141] libmachine: Running pre-create checks...
	I0414 15:25:00.307143 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .PreCreateCheck
	I0414 15:25:00.307580 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetConfigRaw
	I0414 15:25:00.308069 1891460 main.go:141] libmachine: Creating machine...
	I0414 15:25:00.308090 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .Create
	I0414 15:25:00.308385 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) creating KVM machine...
	I0414 15:25:00.308409 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) creating network...
	I0414 15:25:00.309811 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found existing default KVM network
	I0414 15:25:00.311386 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:00.311149 1891545 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000204b20}
	I0414 15:25:00.311446 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | created network xml: 
	I0414 15:25:00.311477 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | <network>
	I0414 15:25:00.311495 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG |   <name>mk-kubernetes-upgrade-608146</name>
	I0414 15:25:00.311524 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG |   <dns enable='no'/>
	I0414 15:25:00.311532 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG |   
	I0414 15:25:00.311541 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0414 15:25:00.311547 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG |     <dhcp>
	I0414 15:25:00.311557 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0414 15:25:00.311591 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG |     </dhcp>
	I0414 15:25:00.311612 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG |   </ip>
	I0414 15:25:00.311628 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG |   
	I0414 15:25:00.311638 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | </network>
	I0414 15:25:00.311652 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | 
	I0414 15:25:00.317223 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | trying to create private KVM network mk-kubernetes-upgrade-608146 192.168.39.0/24...
	I0414 15:25:00.398784 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | private KVM network mk-kubernetes-upgrade-608146 192.168.39.0/24 created
	I0414 15:25:00.398847 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:00.398776 1891545 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:25:00.398870 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) setting up store path in /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146 ...
	I0414 15:25:00.398886 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) building disk image from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 15:25:00.398911 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Downloading /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 15:25:00.686971 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:00.686801 1891545 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa...
	I0414 15:25:00.964361 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:00.964213 1891545 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/kubernetes-upgrade-608146.rawdisk...
	I0414 15:25:00.964413 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | Writing magic tar header
	I0414 15:25:00.964431 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | Writing SSH key tar header
	I0414 15:25:00.964443 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:00.964386 1891545 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146 ...
	I0414 15:25:00.964551 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146
	I0414 15:25:00.964576 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines
	I0414 15:25:00.964591 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146 (perms=drwx------)
	I0414 15:25:00.964622 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:25:00.964644 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971
	I0414 15:25:00.964660 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines (perms=drwxr-xr-x)
	I0414 15:25:00.964673 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 15:25:00.964684 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube (perms=drwxr-xr-x)
	I0414 15:25:00.964701 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971 (perms=drwxrwxr-x)
	I0414 15:25:00.964712 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 15:25:00.964727 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 15:25:00.964740 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) creating domain...
	I0414 15:25:00.964834 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | checking permissions on dir: /home/jenkins
	I0414 15:25:00.964867 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | checking permissions on dir: /home
	I0414 15:25:00.964883 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | skipping /home - not owner
	I0414 15:25:00.966037 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) define libvirt domain using xml: 
	I0414 15:25:00.966056 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) <domain type='kvm'>
	I0414 15:25:00.966063 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   <name>kubernetes-upgrade-608146</name>
	I0414 15:25:00.966080 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   <memory unit='MiB'>2200</memory>
	I0414 15:25:00.966088 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   <vcpu>2</vcpu>
	I0414 15:25:00.966095 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   <features>
	I0414 15:25:00.966102 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <acpi/>
	I0414 15:25:00.966109 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <apic/>
	I0414 15:25:00.966128 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <pae/>
	I0414 15:25:00.966138 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     
	I0414 15:25:00.966145 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   </features>
	I0414 15:25:00.966160 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   <cpu mode='host-passthrough'>
	I0414 15:25:00.966169 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   
	I0414 15:25:00.966178 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   </cpu>
	I0414 15:25:00.966186 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   <os>
	I0414 15:25:00.966196 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <type>hvm</type>
	I0414 15:25:00.966208 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <boot dev='cdrom'/>
	I0414 15:25:00.966218 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <boot dev='hd'/>
	I0414 15:25:00.966227 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <bootmenu enable='no'/>
	I0414 15:25:00.966236 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   </os>
	I0414 15:25:00.966244 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   <devices>
	I0414 15:25:00.966255 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <disk type='file' device='cdrom'>
	I0414 15:25:00.966292 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/boot2docker.iso'/>
	I0414 15:25:00.966317 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <target dev='hdc' bus='scsi'/>
	I0414 15:25:00.966336 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <readonly/>
	I0414 15:25:00.966349 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     </disk>
	I0414 15:25:00.966359 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <disk type='file' device='disk'>
	I0414 15:25:00.966392 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 15:25:00.966443 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/kubernetes-upgrade-608146.rawdisk'/>
	I0414 15:25:00.966459 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <target dev='hda' bus='virtio'/>
	I0414 15:25:00.966467 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     </disk>
	I0414 15:25:00.966475 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <interface type='network'>
	I0414 15:25:00.966487 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <source network='mk-kubernetes-upgrade-608146'/>
	I0414 15:25:00.966495 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <model type='virtio'/>
	I0414 15:25:00.966506 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     </interface>
	I0414 15:25:00.966522 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <interface type='network'>
	I0414 15:25:00.966536 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <source network='default'/>
	I0414 15:25:00.966543 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <model type='virtio'/>
	I0414 15:25:00.966555 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     </interface>
	I0414 15:25:00.966562 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <serial type='pty'>
	I0414 15:25:00.966574 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <target port='0'/>
	I0414 15:25:00.966581 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     </serial>
	I0414 15:25:00.966594 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <console type='pty'>
	I0414 15:25:00.966602 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <target type='serial' port='0'/>
	I0414 15:25:00.966611 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     </console>
	I0414 15:25:00.966620 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     <rng model='virtio'>
	I0414 15:25:00.966629 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)       <backend model='random'>/dev/random</backend>
	I0414 15:25:00.966639 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     </rng>
	I0414 15:25:00.966647 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     
	I0414 15:25:00.966656 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)     
	I0414 15:25:00.966664 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146)   </devices>
	I0414 15:25:00.966674 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) </domain>
	I0414 15:25:00.966686 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) 
	I0414 15:25:00.972273 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:f2:56:6e in network default
	I0414 15:25:00.972824 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:00.972837 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) starting domain...
	I0414 15:25:00.972846 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) ensuring networks are active...
	I0414 15:25:00.973674 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Ensuring network default is active
	I0414 15:25:00.973967 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Ensuring network mk-kubernetes-upgrade-608146 is active
	I0414 15:25:00.974530 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) getting domain XML...
	I0414 15:25:00.975289 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) creating domain...
	I0414 15:25:01.361311 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) waiting for IP...
	I0414 15:25:01.362111 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:01.362562 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:01.362624 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:01.362582 1891545 retry.go:31] will retry after 283.041494ms: waiting for domain to come up
	I0414 15:25:01.647139 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:01.647680 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:01.647708 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:01.647623 1891545 retry.go:31] will retry after 295.457525ms: waiting for domain to come up
	I0414 15:25:01.945531 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:01.946197 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:01.946225 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:01.946177 1891545 retry.go:31] will retry after 338.856612ms: waiting for domain to come up
	I0414 15:25:02.286905 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:02.287425 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:02.287498 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:02.287418 1891545 retry.go:31] will retry after 514.22532ms: waiting for domain to come up
	I0414 15:25:02.802710 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:02.803146 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:02.803173 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:02.803139 1891545 retry.go:31] will retry after 625.23288ms: waiting for domain to come up
	I0414 15:25:03.429587 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:03.430011 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:03.430063 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:03.429999 1891545 retry.go:31] will retry after 680.485742ms: waiting for domain to come up
	I0414 15:25:04.112097 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:04.112606 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:04.112631 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:04.112574 1891545 retry.go:31] will retry after 874.271659ms: waiting for domain to come up
	I0414 15:25:04.989094 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:04.989730 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:04.989759 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:04.989694 1891545 retry.go:31] will retry after 1.175241081s: waiting for domain to come up
	I0414 15:25:06.166671 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:06.167274 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:06.167302 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:06.167223 1891545 retry.go:31] will retry after 1.803947832s: waiting for domain to come up
	I0414 15:25:07.974065 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:07.974652 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:07.974682 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:07.974630 1891545 retry.go:31] will retry after 1.989411268s: waiting for domain to come up
	I0414 15:25:09.966909 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:09.967395 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:09.967474 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:09.967395 1891545 retry.go:31] will retry after 1.952702196s: waiting for domain to come up
	I0414 15:25:11.921417 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:11.921980 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:11.922009 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:11.921928 1891545 retry.go:31] will retry after 3.170060748s: waiting for domain to come up
	I0414 15:25:15.095504 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:15.096028 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:15.096078 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:15.095993 1891545 retry.go:31] will retry after 4.234770589s: waiting for domain to come up
	I0414 15:25:19.335561 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:19.335958 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find current IP address of domain kubernetes-upgrade-608146 in network mk-kubernetes-upgrade-608146
	I0414 15:25:19.335981 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | I0414 15:25:19.335926 1891545 retry.go:31] will retry after 4.486801541s: waiting for domain to come up
	I0414 15:25:23.827313 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:23.827797 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has current primary IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:23.827818 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) found domain IP: 192.168.39.243
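	(For illustration only: the "will retry after ..." lines above show the driver polling for a DHCP lease with a growing, jittered delay until the domain reports an IP. The following self-contained Go sketch mirrors that pattern in spirit; it is not minikube's retry.go, and the initial delay, growth factor and deadline are assumptions.)
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForIP polls lookup until it returns an address or the deadline
	// passes, sleeping a growing, jittered interval between attempts.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			// jitter the delay a little and grow it for the next round
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return "", errors.New("timed out waiting for domain IP")
	}
	
	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 5 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.39.243", nil
		}, 2*time.Minute)
		fmt.Println(ip, err)
	}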
	I0414 15:25:23.827831 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) reserving static IP address...
	I0414 15:25:23.828108 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-608146", mac: "52:54:00:34:dc:73", ip: "192.168.39.243"} in network mk-kubernetes-upgrade-608146
	I0414 15:25:23.912161 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) reserved static IP address 192.168.39.243 for domain kubernetes-upgrade-608146
	I0414 15:25:23.912197 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | Getting to WaitForSSH function...
	I0414 15:25:23.912226 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) waiting for SSH...
	I0414 15:25:23.914994 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:23.915289 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146
	I0414 15:25:23.915315 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-608146 interface with MAC address 52:54:00:34:dc:73
	I0414 15:25:23.915493 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | Using SSH client type: external
	I0414 15:25:23.915516 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa (-rw-------)
	I0414 15:25:23.915574 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:25:23.915595 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | About to run SSH command:
	I0414 15:25:23.915614 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | exit 0
	I0414 15:25:23.919827 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | SSH cmd err, output: exit status 255: 
	I0414 15:25:23.919853 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0414 15:25:23.919864 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | command : exit 0
	I0414 15:25:23.919872 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | err     : exit status 255
	I0414 15:25:23.919888 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | output  : 
	I0414 15:25:26.920807 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | Getting to WaitForSSH function...
	I0414 15:25:26.924156 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:26.924717 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:26.924750 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:26.924852 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | Using SSH client type: external
	I0414 15:25:26.924893 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa (-rw-------)
	I0414 15:25:26.924920 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:25:26.924938 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | About to run SSH command:
	I0414 15:25:26.924951 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | exit 0
	I0414 15:25:27.054842 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | SSH cmd err, output: <nil>: 
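	(For illustration only: WaitForSSH above probes the guest by running `exit 0` over SSH, first through the external ssh binary and, once that succeeds, through a native client. A hypothetical self-contained Go sketch of such a probe using golang.org/x/crypto/ssh follows; the address, user and key path are copied from the log, but the function itself is an assumption, not minikube's implementation.)
	package main
	
	import (
		"fmt"
		"os"
		"time"
	
		"golang.org/x/crypto/ssh"
	)
	
	// probeSSH runs "exit 0" on the target and reports whether sshd answered.
	func probeSSH(addr, user, keyPath string) error {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0")
	}
	
	func main() {
		err := probeSSH("192.168.39.243:22", "docker",
			"/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa")
		fmt.Println("ssh probe:", err)
	}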
	I0414 15:25:27.055153 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) KVM machine creation complete
	I0414 15:25:27.055479 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetConfigRaw
	I0414 15:25:27.056179 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:25:27.056393 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:25:27.056601 1891460 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 15:25:27.056620 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetState
	I0414 15:25:27.058017 1891460 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 15:25:27.058032 1891460 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 15:25:27.058038 1891460 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 15:25:27.058044 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:27.060802 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.061246 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:27.061294 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.061474 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:27.061707 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:27.061900 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:27.062045 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:27.062203 1891460 main.go:141] libmachine: Using SSH client type: native
	I0414 15:25:27.062500 1891460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0414 15:25:27.062516 1891460 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 15:25:27.178089 1891460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:25:27.178117 1891460 main.go:141] libmachine: Detecting the provisioner...
	I0414 15:25:27.178128 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:27.181273 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.181756 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:27.181794 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.181978 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:27.182200 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:27.182424 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:27.182586 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:27.182761 1891460 main.go:141] libmachine: Using SSH client type: native
	I0414 15:25:27.182985 1891460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0414 15:25:27.182996 1891460 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 15:25:27.303526 1891460 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 15:25:27.303664 1891460 main.go:141] libmachine: found compatible host: buildroot
	I0414 15:25:27.303684 1891460 main.go:141] libmachine: Provisioning with buildroot...
	I0414 15:25:27.303698 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetMachineName
	I0414 15:25:27.304054 1891460 buildroot.go:166] provisioning hostname "kubernetes-upgrade-608146"
	I0414 15:25:27.304084 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetMachineName
	I0414 15:25:27.304296 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:27.307337 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.307693 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:27.307728 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.307905 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:27.308101 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:27.308285 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:27.308423 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:27.308601 1891460 main.go:141] libmachine: Using SSH client type: native
	I0414 15:25:27.308806 1891460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0414 15:25:27.308819 1891460 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-608146 && echo "kubernetes-upgrade-608146" | sudo tee /etc/hostname
	I0414 15:25:27.449360 1891460 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-608146
	
	I0414 15:25:27.449398 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:27.452282 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.452618 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:27.452639 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.452898 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:27.453105 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:27.453289 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:27.453413 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:27.453576 1891460 main.go:141] libmachine: Using SSH client type: native
	I0414 15:25:27.453915 1891460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0414 15:25:27.453941 1891460 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-608146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-608146/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-608146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:25:27.576894 1891460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:25:27.576929 1891460 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:25:27.576955 1891460 buildroot.go:174] setting up certificates
	I0414 15:25:27.576970 1891460 provision.go:84] configureAuth start
	I0414 15:25:27.576984 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetMachineName
	I0414 15:25:27.577290 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetIP
	I0414 15:25:27.580360 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.580824 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:27.580860 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.581040 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:27.583413 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.583787 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:27.583817 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:27.583973 1891460 provision.go:143] copyHostCerts
	I0414 15:25:27.584052 1891460 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:25:27.584073 1891460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:25:27.584130 1891460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:25:27.584232 1891460 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:25:27.584240 1891460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:25:27.584270 1891460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:25:27.584319 1891460 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:25:27.584326 1891460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:25:27.584342 1891460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:25:27.584385 1891460 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-608146 san=[127.0.0.1 192.168.39.243 kubernetes-upgrade-608146 localhost minikube]
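	(For illustration only: the line above shows a server certificate being generated with SANs for 127.0.0.1, 192.168.39.243, the hostname, localhost and minikube. The Go sketch below produces a certificate carrying the same SANs using only the standard library; for brevity it self-signs, whereas the log shows signing against the profile's CA, and the validity period and serial number are assumptions.)
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Generate a key and a server certificate with the SANs listed in the log.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject: pkix.Name{
				Organization: []string{"jenkins.kubernetes-upgrade-608146"},
				CommonName:   "kubernetes-upgrade-608146",
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:    []string{"kubernetes-upgrade-608146", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.243")},
		}
		// Self-signed here; a CA-signed variant would pass the CA cert and key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}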
	I0414 15:25:28.248420 1891460 provision.go:177] copyRemoteCerts
	I0414 15:25:28.248491 1891460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:25:28.248525 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:28.251457 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.251868 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:28.251904 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.252131 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:28.252336 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:28.252533 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:28.252701 1891460 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa Username:docker}
	I0414 15:25:28.345573 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0414 15:25:28.377698 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:25:28.409956 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:25:28.437620 1891460 provision.go:87] duration metric: took 860.629289ms to configureAuth
	I0414 15:25:28.437663 1891460 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:25:28.437943 1891460 config.go:182] Loaded profile config "kubernetes-upgrade-608146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 15:25:28.438073 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:28.440751 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.441171 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:28.441200 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.441377 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:28.441612 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:28.441807 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:28.441980 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:28.442135 1891460 main.go:141] libmachine: Using SSH client type: native
	I0414 15:25:28.442339 1891460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0414 15:25:28.442355 1891460 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:25:28.682783 1891460 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:25:28.682817 1891460 main.go:141] libmachine: Checking connection to Docker...
	I0414 15:25:28.682828 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetURL
	I0414 15:25:28.684264 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | using libvirt version 6000000
	I0414 15:25:28.686542 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.686867 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:28.686897 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.687035 1891460 main.go:141] libmachine: Docker is up and running!
	I0414 15:25:28.687047 1891460 main.go:141] libmachine: Reticulating splines...
	I0414 15:25:28.687054 1891460 client.go:171] duration metric: took 28.380207178s to LocalClient.Create
	I0414 15:25:28.687082 1891460 start.go:167] duration metric: took 28.380288556s to libmachine.API.Create "kubernetes-upgrade-608146"
	I0414 15:25:28.687096 1891460 start.go:293] postStartSetup for "kubernetes-upgrade-608146" (driver="kvm2")
	I0414 15:25:28.687110 1891460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:25:28.687128 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:25:28.687369 1891460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:25:28.687399 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:28.689669 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.690019 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:28.690055 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.690125 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:28.690316 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:28.690506 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:28.690701 1891460 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa Username:docker}
	I0414 15:25:28.778186 1891460 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:25:28.782769 1891460 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:25:28.782821 1891460 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:25:28.782959 1891460 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:25:28.783038 1891460 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:25:28.783125 1891460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:25:28.794107 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:25:28.820210 1891460 start.go:296] duration metric: took 133.089221ms for postStartSetup
	I0414 15:25:28.820269 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetConfigRaw
	I0414 15:25:28.820926 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetIP
	I0414 15:25:28.823570 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.823931 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:28.823954 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.824240 1891460 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/config.json ...
	I0414 15:25:28.824451 1891460 start.go:128] duration metric: took 28.544277239s to createHost
	I0414 15:25:28.824474 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:28.826795 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.827113 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:28.827137 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.827332 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:28.827544 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:28.827691 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:28.827824 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:28.827962 1891460 main.go:141] libmachine: Using SSH client type: native
	I0414 15:25:28.828241 1891460 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.39.243 22 <nil> <nil>}
	I0414 15:25:28.828259 1891460 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:25:28.943508 1891460 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744644328.922035611
	
	I0414 15:25:28.943558 1891460 fix.go:216] guest clock: 1744644328.922035611
	I0414 15:25:28.943566 1891460 fix.go:229] Guest: 2025-04-14 15:25:28.922035611 +0000 UTC Remote: 2025-04-14 15:25:28.824462774 +0000 UTC m=+34.357798793 (delta=97.572837ms)
	I0414 15:25:28.943599 1891460 fix.go:200] guest clock delta is within tolerance: 97.572837ms
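	(For illustration only: the clock check above reads the guest's clock with `date +%s.%N` and compares it with the host's wall clock. The tiny Go sketch below reproduces the arithmetic from the values in the log, 1744644328.922035611 versus 2025-04-14 15:25:28.824462774 UTC, giving the logged delta of roughly 97.57ms; the tolerance constant is an assumption, not minikube's exact threshold.)
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		// Values taken from the log lines above.
		guest := time.Unix(1744644328, 922035611)
		remote := time.Date(2025, 4, 14, 15, 25, 28, 824462774, time.UTC)
		delta := guest.Sub(remote)
		const tolerance = 1 * time.Second // hypothetical threshold for this sketch
		fmt.Printf("delta=%v within tolerance=%v: %t\n",
			delta, tolerance, delta < tolerance && delta > -tolerance)
	}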
	I0414 15:25:28.943604 1891460 start.go:83] releasing machines lock for "kubernetes-upgrade-608146", held for 28.663685499s
	I0414 15:25:28.943636 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:25:28.943984 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetIP
	I0414 15:25:28.947160 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.947543 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:28.947576 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.947809 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:25:28.948374 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:25:28.948557 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:25:28.948661 1891460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:25:28.948717 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:28.948759 1891460 ssh_runner.go:195] Run: cat /version.json
	I0414 15:25:28.948809 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHHostname
	I0414 15:25:28.951322 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.951540 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.951692 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:28.951714 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.951901 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:28.951923 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:28.951961 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:28.952111 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHPort
	I0414 15:25:28.952202 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:28.952284 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHKeyPath
	I0414 15:25:28.952367 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:28.952454 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetSSHUsername
	I0414 15:25:28.952504 1891460 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa Username:docker}
	I0414 15:25:28.952566 1891460 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/kubernetes-upgrade-608146/id_rsa Username:docker}
	I0414 15:25:29.040285 1891460 ssh_runner.go:195] Run: systemctl --version
	I0414 15:25:29.064335 1891460 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:25:29.235375 1891460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:25:29.241929 1891460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:25:29.242018 1891460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:25:29.261531 1891460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:25:29.261568 1891460 start.go:495] detecting cgroup driver to use...
	I0414 15:25:29.261652 1891460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:25:29.286292 1891460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:25:29.308402 1891460 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:25:29.308482 1891460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:25:29.325548 1891460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:25:29.341888 1891460 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:25:29.487184 1891460 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:25:29.660479 1891460 docker.go:233] disabling docker service ...
	I0414 15:25:29.660611 1891460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:25:29.678029 1891460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:25:29.694339 1891460 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:25:29.847815 1891460 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:25:29.977661 1891460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:25:29.995011 1891460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:25:30.015936 1891460 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 15:25:30.016021 1891460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:25:30.030105 1891460 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:25:30.030190 1891460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:25:30.044373 1891460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:25:30.059156 1891460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:25:30.073038 1891460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:25:30.087295 1891460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:25:30.099828 1891460 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:25:30.099901 1891460 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:25:30.116142 1891460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 15:25:30.128209 1891460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:25:30.255743 1891460 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 15:25:30.370447 1891460 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:25:30.370542 1891460 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:25:30.376206 1891460 start.go:563] Will wait 60s for crictl version
	I0414 15:25:30.376289 1891460 ssh_runner.go:195] Run: which crictl
	I0414 15:25:30.382000 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:25:30.437505 1891460 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:25:30.437609 1891460 ssh_runner.go:195] Run: crio --version
	I0414 15:25:30.469859 1891460 ssh_runner.go:195] Run: crio --version
	I0414 15:25:30.505973 1891460 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 15:25:30.507345 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetIP
	I0414 15:25:30.510655 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:30.511120 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:dc:73", ip: ""} in network mk-kubernetes-upgrade-608146: {Iface:virbr1 ExpiryTime:2025-04-14 16:25:15 +0000 UTC Type:0 Mac:52:54:00:34:dc:73 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:kubernetes-upgrade-608146 Clientid:01:52:54:00:34:dc:73}
	I0414 15:25:30.511159 1891460 main.go:141] libmachine: (kubernetes-upgrade-608146) DBG | domain kubernetes-upgrade-608146 has defined IP address 192.168.39.243 and MAC address 52:54:00:34:dc:73 in network mk-kubernetes-upgrade-608146
	I0414 15:25:30.511398 1891460 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0414 15:25:30.516389 1891460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:25:30.534008 1891460 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-608146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-608146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:25:30.534163 1891460 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 15:25:30.534219 1891460 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:25:30.571755 1891460 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 15:25:30.571836 1891460 ssh_runner.go:195] Run: which lz4
	I0414 15:25:30.576398 1891460 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:25:30.581454 1891460 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:25:30.581500 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 15:25:32.553916 1891460 crio.go:462] duration metric: took 1.977581268s to copy over tarball
	I0414 15:25:32.554027 1891460 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 15:25:35.293909 1891460 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.73983615s)
	I0414 15:25:35.293987 1891460 crio.go:469] duration metric: took 2.739995471s to extract the tarball
	I0414 15:25:35.294003 1891460 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 15:25:35.337897 1891460 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:25:35.391902 1891460 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 15:25:35.391940 1891460 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 15:25:35.392024 1891460 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:25:35.392101 1891460 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 15:25:35.392117 1891460 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:25:35.392135 1891460 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:25:35.392136 1891460 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 15:25:35.392217 1891460 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:25:35.392073 1891460 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 15:25:35.392075 1891460 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:25:35.394029 1891460 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 15:25:35.394098 1891460 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:25:35.394031 1891460 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 15:25:35.394032 1891460 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 15:25:35.394041 1891460 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:25:35.394136 1891460 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:25:35.394459 1891460 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:25:35.394463 1891460 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:25:35.526799 1891460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 15:25:35.527851 1891460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 15:25:35.537047 1891460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:25:35.540093 1891460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:25:35.561046 1891460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 15:25:35.590194 1891460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:25:35.590867 1891460 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 15:25:35.590914 1891460 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 15:25:35.590953 1891460 ssh_runner.go:195] Run: which crictl
	I0414 15:25:35.591366 1891460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:25:35.654312 1891460 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 15:25:35.654406 1891460 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 15:25:35.654470 1891460 ssh_runner.go:195] Run: which crictl
	I0414 15:25:35.720649 1891460 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 15:25:35.720704 1891460 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:25:35.720726 1891460 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 15:25:35.720766 1891460 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:25:35.720777 1891460 ssh_runner.go:195] Run: which crictl
	I0414 15:25:35.720818 1891460 ssh_runner.go:195] Run: which crictl
	I0414 15:25:35.728188 1891460 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 15:25:35.728250 1891460 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 15:25:35.728308 1891460 ssh_runner.go:195] Run: which crictl
	I0414 15:25:35.736952 1891460 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 15:25:35.737026 1891460 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:25:35.737082 1891460 ssh_runner.go:195] Run: which crictl
	I0414 15:25:35.737096 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 15:25:35.737154 1891460 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 15:25:35.737178 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 15:25:35.737203 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:25:35.737221 1891460 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:25:35.737257 1891460 ssh_runner.go:195] Run: which crictl
	I0414 15:25:35.739213 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:25:35.742389 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 15:25:35.825978 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:25:35.826222 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 15:25:35.878700 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 15:25:35.878760 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:25:35.878705 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:25:35.878950 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:25:35.879005 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 15:25:35.946554 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:25:35.951405 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 15:25:36.079386 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:25:36.079489 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 15:25:36.079555 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:25:36.079608 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 15:25:36.079644 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:25:36.127225 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:25:36.127304 1891460 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 15:25:36.227042 1891460 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 15:25:36.227131 1891460 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:25:36.227153 1891460 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 15:25:36.227227 1891460 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 15:25:36.229425 1891460 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 15:25:36.246989 1891460 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 15:25:36.276526 1891460 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 15:25:36.417071 1891460 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:25:36.560160 1891460 cache_images.go:92] duration metric: took 1.16819449s to LoadCachedImages
	W0414 15:25:36.560268 1891460 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0414 15:25:36.560286 1891460 kubeadm.go:934] updating node { 192.168.39.243 8443 v1.20.0 crio true true} ...
	I0414 15:25:36.560410 1891460 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-608146 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-608146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 15:25:36.560502 1891460 ssh_runner.go:195] Run: crio config
	I0414 15:25:36.612191 1891460 cni.go:84] Creating CNI manager for ""
	I0414 15:25:36.612220 1891460 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:25:36.612235 1891460 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 15:25:36.612259 1891460 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-608146 NodeName:kubernetes-upgrade-608146 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 15:25:36.612460 1891460 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-608146"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 15:25:36.612558 1891460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 15:25:36.623364 1891460 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 15:25:36.623456 1891460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 15:25:36.634188 1891460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0414 15:25:36.652481 1891460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 15:25:36.670441 1891460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0414 15:25:36.689124 1891460 ssh_runner.go:195] Run: grep 192.168.39.243	control-plane.minikube.internal$ /etc/hosts
	I0414 15:25:36.693649 1891460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:25:36.715462 1891460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:25:36.853275 1891460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:25:36.872826 1891460 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146 for IP: 192.168.39.243
	I0414 15:25:36.872861 1891460 certs.go:194] generating shared ca certs ...
	I0414 15:25:36.872884 1891460 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:25:36.873079 1891460 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 15:25:36.873121 1891460 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 15:25:36.873136 1891460 certs.go:256] generating profile certs ...
	I0414 15:25:36.873213 1891460 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/client.key
	I0414 15:25:36.873254 1891460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/client.crt with IP's: []
	I0414 15:25:37.348005 1891460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/client.crt ...
	I0414 15:25:37.348048 1891460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/client.crt: {Name:mk404dae8c88322a947db3ef9050c62e9b17f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:25:37.348264 1891460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/client.key ...
	I0414 15:25:37.348284 1891460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/client.key: {Name:mkd353d710ac25808298344620523b51fbe66a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:25:37.348404 1891460 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.key.529805af
	I0414 15:25:37.348426 1891460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.crt.529805af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.243]
	I0414 15:25:37.413659 1891460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.crt.529805af ...
	I0414 15:25:37.413699 1891460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.crt.529805af: {Name:mk8580d2aee725cace5520d4ff915ab52d696085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:25:37.488440 1891460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.key.529805af ...
	I0414 15:25:37.488504 1891460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.key.529805af: {Name:mkb76810bcf4b42301c0a757b806fd6fd1f1a9ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:25:37.488694 1891460 certs.go:381] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.crt.529805af -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.crt
	I0414 15:25:37.488818 1891460 certs.go:385] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.key.529805af -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.key
	I0414 15:25:37.488908 1891460 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/proxy-client.key
	I0414 15:25:37.488934 1891460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/proxy-client.crt with IP's: []
	I0414 15:25:37.821838 1891460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/proxy-client.crt ...
	I0414 15:25:37.821880 1891460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/proxy-client.crt: {Name:mk7f7912ff2e306fbd61072e7df2a2523d2be19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:25:37.822117 1891460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/proxy-client.key ...
	I0414 15:25:37.822140 1891460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/proxy-client.key: {Name:mk048877334b7c77b39c6dff79e349ad98e8e38b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:25:37.822404 1891460 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem (1338 bytes)
	W0414 15:25:37.822462 1891460 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270_empty.pem, impossibly tiny 0 bytes
	I0414 15:25:37.822482 1891460 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 15:25:37.822519 1891460 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 15:25:37.822571 1891460 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 15:25:37.822607 1891460 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 15:25:37.822669 1891460 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:25:37.823279 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 15:25:37.883198 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:25:37.923128 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:25:37.964980 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:25:38.001077 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0414 15:25:38.032736 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0414 15:25:38.063854 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:25:38.102423 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 15:25:38.137858 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:25:38.172141 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:25:38.211888 1891460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:25:38.249371 1891460 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:25:38.274382 1891460 ssh_runner.go:195] Run: openssl version
	I0414 15:25:38.281512 1891460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:25:38.295183 1891460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:25:38.301179 1891460 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:25:38.301266 1891460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:25:38.309267 1891460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:25:38.323020 1891460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:25:38.337503 1891460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:25:38.343907 1891460 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:25:38.344006 1891460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:25:38.350998 1891460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 15:25:38.364525 1891460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:25:38.377135 1891460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:25:38.382756 1891460 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:25:38.382823 1891460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:25:38.389534 1891460 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 15:25:38.403115 1891460 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:25:38.408691 1891460 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 15:25:38.408777 1891460 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-608146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-608146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:25:38.408890 1891460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:25:38.408968 1891460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:25:38.461656 1891460 cri.go:89] found id: ""
	I0414 15:25:38.461753 1891460 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 15:25:38.478865 1891460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:25:38.498858 1891460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:25:38.524401 1891460 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:25:38.524496 1891460 kubeadm.go:157] found existing configuration files:
	
	I0414 15:25:38.524599 1891460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:25:38.544693 1891460 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:25:38.544872 1891460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:25:38.564742 1891460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:25:38.589940 1891460 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:25:38.590091 1891460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:25:38.603278 1891460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:25:38.614771 1891460 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:25:38.614853 1891460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:25:38.626768 1891460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:25:38.637610 1891460 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:25:38.637680 1891460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:25:38.649204 1891460 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:25:38.992872 1891460 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:27:37.132759 1891460 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 15:27:37.132903 1891460 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 15:27:37.134651 1891460 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 15:27:37.134793 1891460 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:27:37.134954 1891460 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:27:37.135083 1891460 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:27:37.135198 1891460 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 15:27:37.135282 1891460 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:27:37.137252 1891460 out.go:235]   - Generating certificates and keys ...
	I0414 15:27:37.137374 1891460 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:27:37.137477 1891460 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:27:37.137634 1891460 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 15:27:37.137721 1891460 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 15:27:37.137832 1891460 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 15:27:37.137922 1891460 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 15:27:37.138001 1891460 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 15:27:37.138191 1891460 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-608146 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	I0414 15:27:37.138270 1891460 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 15:27:37.138458 1891460 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-608146 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	I0414 15:27:37.138545 1891460 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 15:27:37.138640 1891460 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 15:27:37.138709 1891460 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 15:27:37.138810 1891460 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:27:37.138964 1891460 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:27:37.139061 1891460 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:27:37.139146 1891460 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:27:37.139226 1891460 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:27:37.139372 1891460 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:27:37.139489 1891460 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:27:37.139547 1891460 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:27:37.139660 1891460 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:27:37.141466 1891460 out.go:235]   - Booting up control plane ...
	I0414 15:27:37.141647 1891460 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:27:37.141758 1891460 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:27:37.141858 1891460 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:27:37.141974 1891460 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:27:37.142189 1891460 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 15:27:37.142260 1891460 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 15:27:37.142358 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:27:37.142652 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:27:37.142753 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:27:37.143026 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:27:37.143126 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:27:37.143386 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:27:37.143492 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:27:37.143771 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:27:37.143871 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:27:37.144143 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:27:37.144156 1891460 kubeadm.go:310] 
	I0414 15:27:37.144213 1891460 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 15:27:37.144274 1891460 kubeadm.go:310] 		timed out waiting for the condition
	I0414 15:27:37.144284 1891460 kubeadm.go:310] 
	I0414 15:27:37.144333 1891460 kubeadm.go:310] 	This error is likely caused by:
	I0414 15:27:37.144384 1891460 kubeadm.go:310] 		- The kubelet is not running
	I0414 15:27:37.144544 1891460 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 15:27:37.144558 1891460 kubeadm.go:310] 
	I0414 15:27:37.144705 1891460 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 15:27:37.144755 1891460 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 15:27:37.144803 1891460 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 15:27:37.144813 1891460 kubeadm.go:310] 
	I0414 15:27:37.144979 1891460 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 15:27:37.145102 1891460 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 15:27:37.145113 1891460 kubeadm.go:310] 
	I0414 15:27:37.145262 1891460 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 15:27:37.145389 1891460 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 15:27:37.145508 1891460 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 15:27:37.145645 1891460 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0414 15:27:37.145847 1891460 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-608146 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-608146 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-608146 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-608146 localhost] and IPs [192.168.39.243 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 15:27:37.145904 1891460 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 15:27:37.146239 1891460 kubeadm.go:310] 
	I0414 15:27:38.627132 1891460 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.481198835s)
	I0414 15:27:38.627227 1891460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:27:38.651571 1891460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:27:38.688859 1891460 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:27:38.688890 1891460 kubeadm.go:157] found existing configuration files:
	
	I0414 15:27:38.688955 1891460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:27:38.709131 1891460 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:27:38.709212 1891460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:27:38.724820 1891460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:27:38.741954 1891460 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:27:38.742050 1891460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:27:38.755700 1891460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:27:38.768813 1891460 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:27:38.768903 1891460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:27:38.782816 1891460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:27:38.797540 1891460 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:27:38.797615 1891460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:27:38.810790 1891460 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:27:38.901403 1891460 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 15:27:38.901489 1891460 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:27:39.063960 1891460 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:27:39.064100 1891460 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:27:39.064230 1891460 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 15:27:39.292584 1891460 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:27:39.501902 1891460 out.go:235]   - Generating certificates and keys ...
	I0414 15:27:39.502067 1891460 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:27:39.502154 1891460 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:27:39.502242 1891460 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 15:27:39.502387 1891460 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 15:27:39.502489 1891460 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 15:27:39.502569 1891460 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 15:27:39.502709 1891460 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 15:27:39.502822 1891460 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 15:27:39.502948 1891460 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 15:27:39.503062 1891460 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 15:27:39.503152 1891460 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 15:27:39.503256 1891460 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:27:39.503345 1891460 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:27:39.568668 1891460 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:27:39.847504 1891460 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:27:39.998767 1891460 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:27:40.017383 1891460 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:27:40.021567 1891460 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:27:40.021773 1891460 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:27:40.188122 1891460 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:27:40.189996 1891460 out.go:235]   - Booting up control plane ...
	I0414 15:27:40.190117 1891460 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:27:40.211436 1891460 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:27:40.213560 1891460 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:27:40.217492 1891460 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:27:40.219291 1891460 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 15:28:20.223407 1891460 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 15:28:20.223773 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:28:20.224063 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:28:25.224782 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:28:25.225049 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:28:35.224901 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:28:35.225113 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:28:55.223998 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:28:55.224290 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:29:35.224259 1891460 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:29:35.224866 1891460 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:29:35.224894 1891460 kubeadm.go:310] 
	I0414 15:29:35.224977 1891460 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 15:29:35.225063 1891460 kubeadm.go:310] 		timed out waiting for the condition
	I0414 15:29:35.225073 1891460 kubeadm.go:310] 
	I0414 15:29:35.225145 1891460 kubeadm.go:310] 	This error is likely caused by:
	I0414 15:29:35.225216 1891460 kubeadm.go:310] 		- The kubelet is not running
	I0414 15:29:35.225434 1891460 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 15:29:35.225444 1891460 kubeadm.go:310] 
	I0414 15:29:35.225675 1891460 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 15:29:35.225749 1891460 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 15:29:35.225817 1891460 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 15:29:35.225826 1891460 kubeadm.go:310] 
	I0414 15:29:35.226055 1891460 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 15:29:35.226347 1891460 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 15:29:35.226402 1891460 kubeadm.go:310] 
	I0414 15:29:35.226702 1891460 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 15:29:35.226936 1891460 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 15:29:35.227113 1891460 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 15:29:35.227277 1891460 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 15:29:35.227314 1891460 kubeadm.go:310] 
	I0414 15:29:35.227635 1891460 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:29:35.227828 1891460 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 15:29:35.228041 1891460 kubeadm.go:394] duration metric: took 3m56.819277528s to StartCluster
	I0414 15:29:35.228076 1891460 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 15:29:35.228193 1891460 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:29:35.228511 1891460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:29:35.276434 1891460 cri.go:89] found id: ""
	I0414 15:29:35.276472 1891460 logs.go:282] 0 containers: []
	W0414 15:29:35.276483 1891460 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:29:35.276506 1891460 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:29:35.276587 1891460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:29:35.322691 1891460 cri.go:89] found id: ""
	I0414 15:29:35.322726 1891460 logs.go:282] 0 containers: []
	W0414 15:29:35.322738 1891460 logs.go:284] No container was found matching "etcd"
	I0414 15:29:35.322745 1891460 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:29:35.322815 1891460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:29:35.359121 1891460 cri.go:89] found id: ""
	I0414 15:29:35.359156 1891460 logs.go:282] 0 containers: []
	W0414 15:29:35.359167 1891460 logs.go:284] No container was found matching "coredns"
	I0414 15:29:35.359175 1891460 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:29:35.359257 1891460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:29:35.395929 1891460 cri.go:89] found id: ""
	I0414 15:29:35.395963 1891460 logs.go:282] 0 containers: []
	W0414 15:29:35.395976 1891460 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:29:35.395983 1891460 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:29:35.396062 1891460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:29:35.432676 1891460 cri.go:89] found id: ""
	I0414 15:29:35.432715 1891460 logs.go:282] 0 containers: []
	W0414 15:29:35.432727 1891460 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:29:35.432735 1891460 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:29:35.432805 1891460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:29:35.470419 1891460 cri.go:89] found id: ""
	I0414 15:29:35.470450 1891460 logs.go:282] 0 containers: []
	W0414 15:29:35.470459 1891460 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:29:35.470466 1891460 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:29:35.470534 1891460 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:29:35.506827 1891460 cri.go:89] found id: ""
	I0414 15:29:35.506865 1891460 logs.go:282] 0 containers: []
	W0414 15:29:35.506877 1891460 logs.go:284] No container was found matching "kindnet"
	I0414 15:29:35.506892 1891460 logs.go:123] Gathering logs for kubelet ...
	I0414 15:29:35.506907 1891460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:29:35.564303 1891460 logs.go:123] Gathering logs for dmesg ...
	I0414 15:29:35.564358 1891460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:29:35.580488 1891460 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:29:35.580529 1891460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:29:35.720896 1891460 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:29:35.720933 1891460 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:29:35.720951 1891460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:29:35.832322 1891460 logs.go:123] Gathering logs for container status ...
	I0414 15:29:35.832368 1891460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 15:29:35.875450 1891460 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 15:29:35.875518 1891460 out.go:270] * 
	* 
	W0414 15:29:35.875655 1891460 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:29:35.875677 1891460 out.go:270] * 
	* 
	W0414 15:29:35.876544 1891460 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 15:29:35.879830 1891460 out.go:201] 
	W0414 15:29:35.881057 1891460 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:29:35.881105 1891460 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 15:29:35.881131 1891460 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 15:29:35.883257 1891460 out.go:201] 

                                                
                                                
** /stderr **
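	(Editor's note, not part of the captured log: the failed run above never gets a healthy kubelet — every probe of localhost:10248 is refused — and minikube exits with K8S_KUBELET_NOT_RUNNING, itself suggesting a retry with the systemd cgroup driver. A minimal sketch of acting on that suggestion, assuming the same profile and flags as the failed invocation, would be:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 \
	    --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	    --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still never answers its health check, inspect it and the
	# runtime on the node, as the captured kubeadm output advises:
	out/minikube-linux-amd64 -p kubernetes-upgrade-608146 ssh -- sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 -p kubernetes-upgrade-608146 ssh -- \
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a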
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-608146
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-608146: (1.349357317s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-608146 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-608146 status --format={{.Host}}: exit status 7 (84.534191ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.035140338s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-608146 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (99.239299ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-608146] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-608146
	    minikube start -p kubernetes-upgrade-608146 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6081462 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-608146 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
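	(Editor's note, not part of the captured log: the downgrade attempt is expected to fail — minikube refuses to move the existing v1.32.2 cluster back to v1.20.0 and exits with K8S_DOWNGRADE_UNSUPPORTED, printing recovery options instead. A sketch of the first option it lists, assuming the same profile name, would be:

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-608146
	out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --kubernetes-version=v1.20.0
	# Alternatively, keep the upgraded cluster at v1.32.2, which is what the
	# test does in the restart step that follows:
	out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --kubernetes-version=v1.32.2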
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-608146 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (13.581608685s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-14 15:30:27.171268053 +0000 UTC m=+4405.164026556
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-608146 -n kubernetes-upgrade-608146
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-608146 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-608146 logs -n 25: (1.608372589s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p NoKubernetes-508923                                | NoKubernetes-508923       | jenkins | v1.35.0 | 14 Apr 25 15:25 UTC | 14 Apr 25 15:25 UTC |
	| start   | -p NoKubernetes-508923                                | NoKubernetes-508923       | jenkins | v1.35.0 | 14 Apr 25 15:25 UTC | 14 Apr 25 15:26 UTC |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-517744                             | running-upgrade-517744    | jenkins | v1.35.0 | 14 Apr 25 15:25 UTC | 14 Apr 25 15:25 UTC |
	| start   | -p cert-expiration-197648                             | cert-expiration-197648    | jenkins | v1.35.0 | 14 Apr 25 15:25 UTC | 14 Apr 25 15:27 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-508923 sudo                           | NoKubernetes-508923       | jenkins | v1.35.0 | 14 Apr 25 15:26 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-508923                                | NoKubernetes-508923       | jenkins | v1.35.0 | 14 Apr 25 15:26 UTC | 14 Apr 25 15:26 UTC |
	| start   | -p force-systemd-flag-470470                          | force-systemd-flag-470470 | jenkins | v1.35.0 | 14 Apr 25 15:26 UTC | 14 Apr 25 15:27 UTC |
	|         | --memory=2048 --force-systemd                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-843870 stop                           | minikube                  | jenkins | v1.26.0 | 14 Apr 25 15:26 UTC | 14 Apr 25 15:26 UTC |
	| start   | -p stopped-upgrade-843870                             | stopped-upgrade-843870    | jenkins | v1.35.0 | 14 Apr 25 15:26 UTC | 14 Apr 25 15:27 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-470470 ssh cat                     | force-systemd-flag-470470 | jenkins | v1.35.0 | 14 Apr 25 15:27 UTC | 14 Apr 25 15:27 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-470470                          | force-systemd-flag-470470 | jenkins | v1.35.0 | 14 Apr 25 15:27 UTC | 14 Apr 25 15:27 UTC |
	| start   | -p cert-options-722854                                | cert-options-722854       | jenkins | v1.35.0 | 14 Apr 25 15:27 UTC | 14 Apr 25 15:28 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-843870                             | stopped-upgrade-843870    | jenkins | v1.35.0 | 14 Apr 25 15:27 UTC | 14 Apr 25 15:27 UTC |
	| start   | -p old-k8s-version-529869                             | old-k8s-version-529869    | jenkins | v1.35.0 | 14 Apr 25 15:27 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| ssh     | cert-options-722854 ssh                               | cert-options-722854       | jenkins | v1.35.0 | 14 Apr 25 15:28 UTC | 14 Apr 25 15:28 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-722854 -- sudo                        | cert-options-722854       | jenkins | v1.35.0 | 14 Apr 25 15:28 UTC | 14 Apr 25 15:28 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-722854                                | cert-options-722854       | jenkins | v1.35.0 | 14 Apr 25 15:28 UTC | 14 Apr 25 15:28 UTC |
	| start   | -p no-preload-542791                                  | no-preload-542791         | jenkins | v1.35.0 | 14 Apr 25 15:28 UTC | 14 Apr 25 15:29 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-608146                          | kubernetes-upgrade-608146 | jenkins | v1.35.0 | 14 Apr 25 15:29 UTC | 14 Apr 25 15:29 UTC |
	| start   | -p kubernetes-upgrade-608146                          | kubernetes-upgrade-608146 | jenkins | v1.35.0 | 14 Apr 25 15:29 UTC | 14 Apr 25 15:30 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-542791            | no-preload-542791         | jenkins | v1.35.0 | 14 Apr 25 15:29 UTC | 14 Apr 25 15:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-542791                                  | no-preload-542791         | jenkins | v1.35.0 | 14 Apr 25 15:29 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| start   | -p cert-expiration-197648                             | cert-expiration-197648    | jenkins | v1.35.0 | 14 Apr 25 15:30 UTC |                     |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-608146                          | kubernetes-upgrade-608146 | jenkins | v1.35.0 | 14 Apr 25 15:30 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-608146                          | kubernetes-upgrade-608146 | jenkins | v1.35.0 | 14 Apr 25 15:30 UTC | 14 Apr 25 15:30 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 15:30:13
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 15:30:13.635077 1895690 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:30:13.635348 1895690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:30:13.635359 1895690 out.go:358] Setting ErrFile to fd 2...
	I0414 15:30:13.635363 1895690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:30:13.635551 1895690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:30:13.636129 1895690 out.go:352] Setting JSON to false
	I0414 15:30:13.637313 1895690 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":40358,"bootTime":1744604256,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:30:13.637376 1895690 start.go:139] virtualization: kvm guest
	I0414 15:30:13.639139 1895690 out.go:177] * [kubernetes-upgrade-608146] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:30:13.640303 1895690 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:30:13.640291 1895690 notify.go:220] Checking for updates...
	I0414 15:30:13.641545 1895690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:30:13.642579 1895690 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:30:13.643746 1895690 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:30:13.645965 1895690 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:30:13.647087 1895690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:30:13.648445 1895690 config.go:182] Loaded profile config "kubernetes-upgrade-608146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:30:13.648911 1895690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:30:13.648987 1895690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:30:13.672345 1895690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
	I0414 15:30:13.672851 1895690 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:30:13.673391 1895690 main.go:141] libmachine: Using API Version  1
	I0414 15:30:13.673412 1895690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:30:13.673843 1895690 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:30:13.674074 1895690 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:30:13.674407 1895690 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:30:13.674851 1895690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:30:13.674923 1895690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:30:13.690826 1895690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45451
	I0414 15:30:13.691330 1895690 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:30:13.691900 1895690 main.go:141] libmachine: Using API Version  1
	I0414 15:30:13.691930 1895690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:30:13.692426 1895690 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:30:13.692672 1895690 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:30:13.729838 1895690 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 15:30:13.731083 1895690 start.go:297] selected driver: kvm2
	I0414 15:30:13.731100 1895690 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-608146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 C
lusterName:kubernetes-upgrade-608146 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:30:13.731202 1895690 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:30:13.731998 1895690 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:30:13.732094 1895690 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 15:30:13.748530 1895690 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 15:30:13.748989 1895690 cni.go:84] Creating CNI manager for ""
	I0414 15:30:13.749054 1895690 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:30:13.749103 1895690 start.go:340] cluster config:
	{Name:kubernetes-upgrade-608146 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-608146 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:30:13.749232 1895690 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:30:13.751197 1895690 out.go:177] * Starting "kubernetes-upgrade-608146" primary control-plane node in "kubernetes-upgrade-608146" cluster
	I0414 15:30:13.752388 1895690 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:30:13.752435 1895690 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 15:30:13.752443 1895690 cache.go:56] Caching tarball of preloaded images
	I0414 15:30:13.752541 1895690 preload.go:172] Found /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 15:30:13.752553 1895690 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 15:30:13.752662 1895690 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kubernetes-upgrade-608146/config.json ...
	I0414 15:30:13.752859 1895690 start.go:360] acquireMachinesLock for kubernetes-upgrade-608146: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:30:13.752905 1895690 start.go:364] duration metric: took 26.082µs to acquireMachinesLock for "kubernetes-upgrade-608146"
	I0414 15:30:13.752917 1895690 start.go:96] Skipping create...Using existing machine configuration
	I0414 15:30:13.752925 1895690 fix.go:54] fixHost starting: 
	I0414 15:30:13.753195 1895690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:30:13.753227 1895690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:30:13.769131 1895690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45919
	I0414 15:30:13.769693 1895690 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:30:13.770174 1895690 main.go:141] libmachine: Using API Version  1
	I0414 15:30:13.770195 1895690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:30:13.770592 1895690 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:30:13.770830 1895690 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .DriverName
	I0414 15:30:13.771007 1895690 main.go:141] libmachine: (kubernetes-upgrade-608146) Calling .GetState
	I0414 15:30:13.772870 1895690 fix.go:112] recreateIfNeeded on kubernetes-upgrade-608146: state=Running err=<nil>
	W0414 15:30:13.772905 1895690 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 15:30:13.774883 1895690 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-608146" VM ...
	I0414 15:30:11.597969 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:30:11.695183 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:30:11.790228 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:30:11.873948 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/cert-expiration-197648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 15:30:12.014174 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/cert-expiration-197648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 15:30:12.129363 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/cert-expiration-197648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:30:12.166829 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/cert-expiration-197648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 15:30:12.203786 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:30:12.254490 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:30:12.306304 1895551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:30:12.341762 1895551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:30:12.404029 1895551 ssh_runner.go:195] Run: openssl version
	I0414 15:30:12.417707 1895551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:30:12.451991 1895551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:30:12.463733 1895551 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:30:12.463867 1895551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:30:12.472648 1895551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 15:30:12.489727 1895551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:30:12.510462 1895551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:30:12.517254 1895551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:30:12.517312 1895551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:30:12.526192 1895551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 15:30:12.541065 1895551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:30:12.555315 1895551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:30:12.560875 1895551 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:30:12.560940 1895551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:30:12.567416 1895551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:30:12.579800 1895551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:30:12.585238 1895551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 15:30:12.598751 1895551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 15:30:12.605826 1895551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 15:30:12.614021 1895551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 15:30:12.621037 1895551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 15:30:12.629064 1895551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 15:30:12.649810 1895551 kubeadm.go:392] StartCluster: {Name:cert-expiration-197648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:cert-expir
ation-197648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.6 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:30:12.649903 1895551 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:30:12.649988 1895551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:30:12.753897 1895551 cri.go:89] found id: "db97b0392a5ead87e9b8386a39ee2eaff78c3ffe7cb35014dfd09cbcc90e6ca0"
	I0414 15:30:12.753912 1895551 cri.go:89] found id: "70f519d032d318b9f7de1436044d5c7512b94f18f74c511c69adfd8de8b6a2e8"
	I0414 15:30:12.753915 1895551 cri.go:89] found id: "43d36dd576bfaae60c9a584b476de3baebe310f5ba5d8413a3427043599648e7"
	I0414 15:30:12.753919 1895551 cri.go:89] found id: "640592cf4b5c6499e5290b70287bb3f8cd0319f6e38118dd04f6b699f1f53508"
	I0414 15:30:12.753922 1895551 cri.go:89] found id: "7a8e7d40960886f6def5f44e1e85ea4823266f156f1532a63c99fe3906f41253"
	I0414 15:30:12.753925 1895551 cri.go:89] found id: "6124813a4610462130896726eabd41103ee670351258387dc464e49ff7ebbecf"
	I0414 15:30:12.753929 1895551 cri.go:89] found id: "489e0fc245bfef21ed86507fcafb64fe8d9972c8251ae63a7441c830a064c615"
	I0414 15:30:12.753932 1895551 cri.go:89] found id: "6c8935a1560d605545b910d6f23bb6203c9ae836cf9e0202123163f96833e744"
	I0414 15:30:12.753935 1895551 cri.go:89] found id: "261b7becc0869a88a0ae79243f1767693ad239f3c98a0c841e7931af7c110417"
	I0414 15:30:12.753942 1895551 cri.go:89] found id: "a9243c6b314fc4f035ccb59da1600941ab9a06e2bf144dcd0b4bf2fa3734560d"
	I0414 15:30:12.753945 1895551 cri.go:89] found id: "bdbc99a6b14b308257777a219cdafdd09992079399eeb7b6cf97996a51b6c222"
	I0414 15:30:12.753948 1895551 cri.go:89] found id: "a780b21f750ff073516801f14d3be82b8319fb17ac1d74e3187ff1b5d4995c51"
	I0414 15:30:12.753951 1895551 cri.go:89] found id: "dd698f110dcdc7b53dc39983e6c43784c2b624854ffefd4b8e027a33e69e846b"
	I0414 15:30:12.753954 1895551 cri.go:89] found id: ""
	I0414 15:30:12.754008 1895551 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
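The openssl x509 -checkend 86400 invocations captured in the log above verify that each control-plane certificate is still valid for at least the next 24 hours before the cluster is restarted. A rough Go equivalent of one such check (illustrative sketch; the path mirrors one of the certificate files shown earlier in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Read a PEM-encoded certificate from the guest's cert directory.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Same window as `-checkend 86400`: fail if the cert expires within 24h.
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Printf("certificate expires soon: %s\n", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least another 24h")
    }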
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-608146 -n kubernetes-upgrade-608146
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-608146 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-js9p8 coredns-668d6bf9bc-ng4n5 kube-proxy-r9gld storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-608146 describe pod coredns-668d6bf9bc-js9p8 coredns-668d6bf9bc-ng4n5 kube-proxy-r9gld storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-608146 describe pod coredns-668d6bf9bc-js9p8 coredns-668d6bf9bc-ng4n5 kube-proxy-r9gld storage-provisioner: exit status 1 (94.207707ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-js9p8" not found
	Error from server (NotFound): pods "coredns-668d6bf9bc-ng4n5" not found
	Error from server (NotFound): pods "kube-proxy-r9gld" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-608146 describe pod coredns-668d6bf9bc-js9p8 coredns-668d6bf9bc-ng4n5 kube-proxy-r9gld storage-provisioner: exit status 1
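The post-mortem first lists pods outside the Running phase with a kubectl field selector and then tries to describe them; by that point the pods had apparently already been recreated during the upgrade, hence the NotFound errors. A rough client-go equivalent of the field-selector query (illustrative sketch; kubeconfig loading is simplified to the default home path):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Simplified: load the default ~/.kube/config rather than a named context.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// Same selector the helper uses: every pod not in the Running phase.
    	pods, err := client.CoreV1().Pods("").List(context.Background(),
    		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
    	}
    }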
helpers_test.go:175: Cleaning up "kubernetes-upgrade-608146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-608146
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-608146: (1.057072392s)
--- FAIL: TestKubernetesUpgrade (336.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (275.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-529869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0414 15:28:01.432851 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:28:18.359240 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-529869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.782397757s)

                                                
                                                
-- stdout --
	* [old-k8s-version-529869] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-529869" primary control-plane node in "old-k8s-version-529869" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 15:27:57.594172 1894281 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:27:57.594453 1894281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:27:57.594464 1894281 out.go:358] Setting ErrFile to fd 2...
	I0414 15:27:57.594469 1894281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:27:57.595117 1894281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:27:57.596097 1894281 out.go:352] Setting JSON to false
	I0414 15:27:57.597454 1894281 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":40222,"bootTime":1744604256,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:27:57.597581 1894281 start.go:139] virtualization: kvm guest
	I0414 15:27:57.599475 1894281 out.go:177] * [old-k8s-version-529869] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:27:57.601177 1894281 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:27:57.601196 1894281 notify.go:220] Checking for updates...
	I0414 15:27:57.603766 1894281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:27:57.605253 1894281 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:27:57.606402 1894281 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:27:57.607564 1894281 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:27:57.608748 1894281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:27:57.610340 1894281 config.go:182] Loaded profile config "cert-expiration-197648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:27:57.610490 1894281 config.go:182] Loaded profile config "cert-options-722854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:27:57.610566 1894281 config.go:182] Loaded profile config "kubernetes-upgrade-608146": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 15:27:57.610669 1894281 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:27:57.649607 1894281 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 15:27:57.650844 1894281 start.go:297] selected driver: kvm2
	I0414 15:27:57.650859 1894281 start.go:901] validating driver "kvm2" against <nil>
	I0414 15:27:57.650871 1894281 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:27:57.651605 1894281 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:27:57.651691 1894281 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 15:27:57.668625 1894281 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 15:27:57.668699 1894281 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 15:27:57.668947 1894281 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:27:57.668983 1894281 cni.go:84] Creating CNI manager for ""
	I0414 15:27:57.669021 1894281 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:27:57.669030 1894281 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 15:27:57.669087 1894281 start.go:340] cluster config:
	{Name:old-k8s-version-529869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-529869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:27:57.669189 1894281 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:27:57.671023 1894281 out.go:177] * Starting "old-k8s-version-529869" primary control-plane node in "old-k8s-version-529869" cluster
	I0414 15:27:57.672186 1894281 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 15:27:57.672251 1894281 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 15:27:57.672265 1894281 cache.go:56] Caching tarball of preloaded images
	I0414 15:27:57.672358 1894281 preload.go:172] Found /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 15:27:57.672373 1894281 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 15:27:57.672480 1894281 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/config.json ...
	I0414 15:27:57.672506 1894281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/config.json: {Name:mk47952f76e51c0b0a1564c9696d8b03612bd3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:27:57.672695 1894281 start.go:360] acquireMachinesLock for old-k8s-version-529869: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:28:00.347953 1894281 start.go:364] duration metric: took 2.675212458s to acquireMachinesLock for "old-k8s-version-529869"
	I0414 15:28:00.348023 1894281 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-529869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 C
lusterName:old-k8s-version-529869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:28:00.348229 1894281 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 15:28:00.351537 1894281 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0414 15:28:00.351917 1894281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:28:00.352000 1894281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:28:00.369616 1894281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0414 15:28:00.370143 1894281 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:28:00.370732 1894281 main.go:141] libmachine: Using API Version  1
	I0414 15:28:00.370759 1894281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:28:00.371182 1894281 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:28:00.371397 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetMachineName
	I0414 15:28:00.371573 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:28:00.371745 1894281 start.go:159] libmachine.API.Create for "old-k8s-version-529869" (driver="kvm2")
	I0414 15:28:00.371783 1894281 client.go:168] LocalClient.Create starting
	I0414 15:28:00.371832 1894281 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem
	I0414 15:28:00.371880 1894281 main.go:141] libmachine: Decoding PEM data...
	I0414 15:28:00.371907 1894281 main.go:141] libmachine: Parsing certificate...
	I0414 15:28:00.372020 1894281 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem
	I0414 15:28:00.372059 1894281 main.go:141] libmachine: Decoding PEM data...
	I0414 15:28:00.372078 1894281 main.go:141] libmachine: Parsing certificate...
	I0414 15:28:00.372104 1894281 main.go:141] libmachine: Running pre-create checks...
	I0414 15:28:00.372123 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .PreCreateCheck
	I0414 15:28:00.372531 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetConfigRaw
	I0414 15:28:00.373020 1894281 main.go:141] libmachine: Creating machine...
	I0414 15:28:00.373037 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .Create
	I0414 15:28:00.373207 1894281 main.go:141] libmachine: (old-k8s-version-529869) creating KVM machine...
	I0414 15:28:00.373229 1894281 main.go:141] libmachine: (old-k8s-version-529869) creating network...
	I0414 15:28:00.374843 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found existing default KVM network
	I0414 15:28:00.376135 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:00.375935 1894321 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:00:21:1c} reservation:<nil>}
	I0414 15:28:00.377642 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:00.377506 1894321 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028a310}
	I0414 15:28:00.377674 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | created network xml: 
	I0414 15:28:00.377686 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | <network>
	I0414 15:28:00.377693 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG |   <name>mk-old-k8s-version-529869</name>
	I0414 15:28:00.377703 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG |   <dns enable='no'/>
	I0414 15:28:00.377709 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG |   
	I0414 15:28:00.377718 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0414 15:28:00.377730 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG |     <dhcp>
	I0414 15:28:00.377741 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0414 15:28:00.377752 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG |     </dhcp>
	I0414 15:28:00.377761 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG |   </ip>
	I0414 15:28:00.377770 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG |   
	I0414 15:28:00.377778 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | </network>
	I0414 15:28:00.377787 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | 
	I0414 15:28:00.383756 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | trying to create private KVM network mk-old-k8s-version-529869 192.168.50.0/24...
	I0414 15:28:00.465557 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | private KVM network mk-old-k8s-version-529869 192.168.50.0/24 created
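The XML printed above is the network definition the KVM driver hands to libvirt (a NAT-less bridge with its own dnsmasq DHCP range on 192.168.50.0/24). As a rough stand-alone illustration only, not minikube's actual code path (the driver talks to libvirt over its API rather than shelling out), the same network could be created with virsh as sketched below; the temp-file handling is an assumption.

package main

import (
	"log"
	"os"
	"os/exec"
)

// networkXML mirrors the definition printed in the log above.
const networkXML = `<network>
  <name>mk-old-k8s-version-529869</name>
  <dns enable='no'/>
  <ip address='192.168.50.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.50.2' end='192.168.50.253'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	// Write the XML to a temp file and hand it to virsh.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	for _, args := range [][]string{
		{"net-define", f.Name()},                       // register the definition
		{"net-start", "mk-old-k8s-version-529869"},     // bring up the bridge + dnsmasq
		{"net-autostart", "mk-old-k8s-version-529869"}, // survive libvirtd restarts
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
	}
}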
	I0414 15:28:00.465649 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:00.465517 1894321 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:28:00.465664 1894281 main.go:141] libmachine: (old-k8s-version-529869) setting up store path in /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869 ...
	I0414 15:28:00.465696 1894281 main.go:141] libmachine: (old-k8s-version-529869) building disk image from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 15:28:00.465714 1894281 main.go:141] libmachine: (old-k8s-version-529869) Downloading /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 15:28:00.773102 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:00.772870 1894321 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa...
	I0414 15:28:00.922516 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:00.922301 1894321 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/old-k8s-version-529869.rawdisk...
	I0414 15:28:00.922565 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | Writing magic tar header
	I0414 15:28:00.922589 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | Writing SSH key tar header
	I0414 15:28:00.922614 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:00.922501 1894321 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869 ...
	I0414 15:28:00.922651 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869
	I0414 15:28:00.922678 1894281 main.go:141] libmachine: (old-k8s-version-529869) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869 (perms=drwx------)
	I0414 15:28:00.922710 1894281 main.go:141] libmachine: (old-k8s-version-529869) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines (perms=drwxr-xr-x)
	I0414 15:28:00.922730 1894281 main.go:141] libmachine: (old-k8s-version-529869) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube (perms=drwxr-xr-x)
	I0414 15:28:00.922749 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines
	I0414 15:28:00.922766 1894281 main.go:141] libmachine: (old-k8s-version-529869) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971 (perms=drwxrwxr-x)
	I0414 15:28:00.922787 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:28:00.922799 1894281 main.go:141] libmachine: (old-k8s-version-529869) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 15:28:00.922847 1894281 main.go:141] libmachine: (old-k8s-version-529869) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 15:28:00.922870 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971
	I0414 15:28:00.922883 1894281 main.go:141] libmachine: (old-k8s-version-529869) creating domain...
	I0414 15:28:00.922902 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 15:28:00.922911 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | checking permissions on dir: /home/jenkins
	I0414 15:28:00.922922 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | checking permissions on dir: /home
	I0414 15:28:00.922934 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | skipping /home - not owner
	I0414 15:28:00.924087 1894281 main.go:141] libmachine: (old-k8s-version-529869) define libvirt domain using xml: 
	I0414 15:28:00.924117 1894281 main.go:141] libmachine: (old-k8s-version-529869) <domain type='kvm'>
	I0414 15:28:00.924129 1894281 main.go:141] libmachine: (old-k8s-version-529869)   <name>old-k8s-version-529869</name>
	I0414 15:28:00.924136 1894281 main.go:141] libmachine: (old-k8s-version-529869)   <memory unit='MiB'>2200</memory>
	I0414 15:28:00.924144 1894281 main.go:141] libmachine: (old-k8s-version-529869)   <vcpu>2</vcpu>
	I0414 15:28:00.924150 1894281 main.go:141] libmachine: (old-k8s-version-529869)   <features>
	I0414 15:28:00.924158 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <acpi/>
	I0414 15:28:00.924184 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <apic/>
	I0414 15:28:00.924213 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <pae/>
	I0414 15:28:00.924229 1894281 main.go:141] libmachine: (old-k8s-version-529869)     
	I0414 15:28:00.924236 1894281 main.go:141] libmachine: (old-k8s-version-529869)   </features>
	I0414 15:28:00.924249 1894281 main.go:141] libmachine: (old-k8s-version-529869)   <cpu mode='host-passthrough'>
	I0414 15:28:00.924262 1894281 main.go:141] libmachine: (old-k8s-version-529869)   
	I0414 15:28:00.924275 1894281 main.go:141] libmachine: (old-k8s-version-529869)   </cpu>
	I0414 15:28:00.924287 1894281 main.go:141] libmachine: (old-k8s-version-529869)   <os>
	I0414 15:28:00.924297 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <type>hvm</type>
	I0414 15:28:00.924309 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <boot dev='cdrom'/>
	I0414 15:28:00.924317 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <boot dev='hd'/>
	I0414 15:28:00.924329 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <bootmenu enable='no'/>
	I0414 15:28:00.924340 1894281 main.go:141] libmachine: (old-k8s-version-529869)   </os>
	I0414 15:28:00.924375 1894281 main.go:141] libmachine: (old-k8s-version-529869)   <devices>
	I0414 15:28:00.924398 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <disk type='file' device='cdrom'>
	I0414 15:28:00.924414 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/boot2docker.iso'/>
	I0414 15:28:00.924424 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <target dev='hdc' bus='scsi'/>
	I0414 15:28:00.924433 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <readonly/>
	I0414 15:28:00.924444 1894281 main.go:141] libmachine: (old-k8s-version-529869)     </disk>
	I0414 15:28:00.924457 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <disk type='file' device='disk'>
	I0414 15:28:00.924468 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 15:28:00.924493 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/old-k8s-version-529869.rawdisk'/>
	I0414 15:28:00.924504 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <target dev='hda' bus='virtio'/>
	I0414 15:28:00.924512 1894281 main.go:141] libmachine: (old-k8s-version-529869)     </disk>
	I0414 15:28:00.924529 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <interface type='network'>
	I0414 15:28:00.924542 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <source network='mk-old-k8s-version-529869'/>
	I0414 15:28:00.924557 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <model type='virtio'/>
	I0414 15:28:00.924568 1894281 main.go:141] libmachine: (old-k8s-version-529869)     </interface>
	I0414 15:28:00.924575 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <interface type='network'>
	I0414 15:28:00.924585 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <source network='default'/>
	I0414 15:28:00.924597 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <model type='virtio'/>
	I0414 15:28:00.924606 1894281 main.go:141] libmachine: (old-k8s-version-529869)     </interface>
	I0414 15:28:00.924616 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <serial type='pty'>
	I0414 15:28:00.924624 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <target port='0'/>
	I0414 15:28:00.924633 1894281 main.go:141] libmachine: (old-k8s-version-529869)     </serial>
	I0414 15:28:00.924667 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <console type='pty'>
	I0414 15:28:00.924705 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <target type='serial' port='0'/>
	I0414 15:28:00.924719 1894281 main.go:141] libmachine: (old-k8s-version-529869)     </console>
	I0414 15:28:00.924729 1894281 main.go:141] libmachine: (old-k8s-version-529869)     <rng model='virtio'>
	I0414 15:28:00.924739 1894281 main.go:141] libmachine: (old-k8s-version-529869)       <backend model='random'>/dev/random</backend>
	I0414 15:28:00.924749 1894281 main.go:141] libmachine: (old-k8s-version-529869)     </rng>
	I0414 15:28:00.924757 1894281 main.go:141] libmachine: (old-k8s-version-529869)     
	I0414 15:28:00.924765 1894281 main.go:141] libmachine: (old-k8s-version-529869)     
	I0414 15:28:00.924773 1894281 main.go:141] libmachine: (old-k8s-version-529869)   </devices>
	I0414 15:28:00.924782 1894281 main.go:141] libmachine: (old-k8s-version-529869) </domain>
	I0414 15:28:00.924793 1894281 main.go:141] libmachine: (old-k8s-version-529869) 
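For orientation only: a domain XML like the one above (boot order cdrom then hd, two virtio NICs on mk-old-k8s-version-529869 and default, serial console, virtio RNG) can also be registered and started by hand. The sketch below assumes the XML has been saved locally as old-k8s-version-529869.xml; the driver itself calls libvirt directly and, as the following lines show, re-reads the domain XML before starting it.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Hypothetical: the domain XML above saved locally as old-k8s-version-529869.xml.
	for _, args := range [][]string{
		{"define", "old-k8s-version-529869.xml"}, // register the domain with libvirt
		{"start", "old-k8s-version-529869"},      // boot it: cdrom first, then hd
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
	}
}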
	I0414 15:28:00.929437 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:30:67:9d in network default
	I0414 15:28:00.930251 1894281 main.go:141] libmachine: (old-k8s-version-529869) starting domain...
	I0414 15:28:00.930275 1894281 main.go:141] libmachine: (old-k8s-version-529869) ensuring networks are active...
	I0414 15:28:00.930284 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:00.931189 1894281 main.go:141] libmachine: (old-k8s-version-529869) Ensuring network default is active
	I0414 15:28:00.931551 1894281 main.go:141] libmachine: (old-k8s-version-529869) Ensuring network mk-old-k8s-version-529869 is active
	I0414 15:28:00.932190 1894281 main.go:141] libmachine: (old-k8s-version-529869) getting domain XML...
	I0414 15:28:00.933025 1894281 main.go:141] libmachine: (old-k8s-version-529869) creating domain...
	I0414 15:28:01.326753 1894281 main.go:141] libmachine: (old-k8s-version-529869) waiting for IP...
	I0414 15:28:01.327871 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:01.328341 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:01.328397 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:01.328344 1894321 retry.go:31] will retry after 253.337377ms: waiting for domain to come up
	I0414 15:28:01.582943 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:01.583524 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:01.583555 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:01.583500 1894321 retry.go:31] will retry after 334.781636ms: waiting for domain to come up
	I0414 15:28:01.920191 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:01.920827 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:01.920852 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:01.920796 1894321 retry.go:31] will retry after 313.16322ms: waiting for domain to come up
	I0414 15:28:02.235351 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:02.235986 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:02.236017 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:02.235948 1894321 retry.go:31] will retry after 367.594918ms: waiting for domain to come up
	I0414 15:28:02.605348 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:02.605964 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:02.606039 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:02.605949 1894321 retry.go:31] will retry after 612.424712ms: waiting for domain to come up
	I0414 15:28:03.219890 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:03.220489 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:03.220553 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:03.220460 1894321 retry.go:31] will retry after 750.226652ms: waiting for domain to come up
	I0414 15:28:03.972282 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:03.972967 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:03.973006 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:03.972898 1894321 retry.go:31] will retry after 1.135680292s: waiting for domain to come up
	I0414 15:28:05.110002 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:05.110606 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:05.110631 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:05.110563 1894321 retry.go:31] will retry after 1.225347627s: waiting for domain to come up
	I0414 15:28:06.337490 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:06.337975 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:06.338000 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:06.337944 1894321 retry.go:31] will retry after 1.570344958s: waiting for domain to come up
	I0414 15:28:07.910179 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:07.910781 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:07.910827 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:07.910737 1894321 retry.go:31] will retry after 1.520237138s: waiting for domain to come up
	I0414 15:28:09.432208 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:09.432687 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:09.432751 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:09.432676 1894321 retry.go:31] will retry after 2.340333413s: waiting for domain to come up
	I0414 15:28:11.775772 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:11.776390 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:11.776420 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:11.776292 1894321 retry.go:31] will retry after 2.752470915s: waiting for domain to come up
	I0414 15:28:14.532240 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:14.532751 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:14.532830 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:14.532760 1894321 retry.go:31] will retry after 3.234443762s: waiting for domain to come up
	I0414 15:28:17.769421 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:17.770009 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:28:17.770053 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:28:17.770004 1894321 retry.go:31] will retry after 4.866321854s: waiting for domain to come up
	I0414 15:28:22.639407 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:22.639848 1894281 main.go:141] libmachine: (old-k8s-version-529869) found domain IP: 192.168.50.117
	I0414 15:28:22.639877 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has current primary IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
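The repeated "will retry after ..." lines above come from a polling loop that asks libvirt for the domain's DHCP lease and backs off with a randomized, growing delay until an address shows up (here after roughly 22 seconds). A minimal self-contained sketch of that pattern follows; findIP is a hypothetical placeholder for the real lease lookup, and the exact backoff constants are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// findIP stands in for the real DHCP-lease lookup against libvirt.
func findIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := findIP(); err == nil {
			return ip, nil
		}
		// Randomize a little and grow the delay between polls.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found domain IP:", ip)
	}
}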
	I0414 15:28:22.639884 1894281 main.go:141] libmachine: (old-k8s-version-529869) reserving static IP address...
	I0414 15:28:22.640207 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-529869", mac: "52:54:00:20:2e:7f", ip: "192.168.50.117"} in network mk-old-k8s-version-529869
	I0414 15:28:22.725158 1894281 main.go:141] libmachine: (old-k8s-version-529869) reserved static IP address 192.168.50.117 for domain old-k8s-version-529869
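Reserving the address turns the dynamic lease into a static host entry in the network's DHCP configuration, so the VM keeps 192.168.50.117 across reboots. Outside of minikube the equivalent operation looks roughly like the virsh net-update call sketched below (MAC, name and IP copied from the log; the shell-out itself is an illustration, not the driver's implementation).

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Pin 192.168.50.117 to the VM's MAC in the mk-old-k8s-version-529869 network.
	host := `<host mac='52:54:00:20:2e:7f' name='old-k8s-version-529869' ip='192.168.50.117'/>`
	cmd := exec.Command("virsh", "net-update", "mk-old-k8s-version-529869",
		"add", "ip-dhcp-host", host, "--live", "--config")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("net-update failed: %v\n%s", err, out)
	}
}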
	I0414 15:28:22.725195 1894281 main.go:141] libmachine: (old-k8s-version-529869) waiting for SSH...
	I0414 15:28:22.725206 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | Getting to WaitForSSH function...
	I0414 15:28:22.728147 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:22.728534 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869
	I0414 15:28:22.728570 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find defined IP address of network mk-old-k8s-version-529869 interface with MAC address 52:54:00:20:2e:7f
	I0414 15:28:22.728685 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | Using SSH client type: external
	I0414 15:28:22.728722 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa (-rw-------)
	I0414 15:28:22.728757 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:28:22.728774 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | About to run SSH command:
	I0414 15:28:22.728784 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | exit 0
	I0414 15:28:22.732782 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | SSH cmd err, output: exit status 255: 
	I0414 15:28:22.732831 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0414 15:28:22.732843 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | command : exit 0
	I0414 15:28:22.732851 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | err     : exit status 255
	I0414 15:28:22.732869 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | output  : 
	I0414 15:28:25.733188 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | Getting to WaitForSSH function...
	I0414 15:28:25.736202 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:25.736562 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:25.736590 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:25.736652 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | Using SSH client type: external
	I0414 15:28:25.736747 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa (-rw-------)
	I0414 15:28:25.736790 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:28:25.736805 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | About to run SSH command:
	I0414 15:28:25.736817 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | exit 0
	I0414 15:28:25.862656 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | SSH cmd err, output: <nil>: 
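WaitForSSH simply keeps running `exit 0` over SSH with the non-interactive options shown above until it succeeds: the attempt at 15:28:22 failed with status 255 because no lease was visible yet, the retry at 15:28:25 went through. A stand-alone sketch of that probe, reusing the same flags, host and key path from the log (the helper name and the 3-second retry cadence are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" on the guest with the same non-interactive options
// the external ssh client in the log uses.
func sshReady(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	ip := "192.168.50.117"
	key := "/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa"
	for attempt := 1; attempt <= 20; attempt++ {
		if err := sshReady(ip, key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		fmt.Printf("attempt %d: SSH not ready, retrying in 3s...\n", attempt)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}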
	I0414 15:28:25.863056 1894281 main.go:141] libmachine: (old-k8s-version-529869) KVM machine creation complete
	I0414 15:28:25.863408 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetConfigRaw
	I0414 15:28:25.864056 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:28:25.864316 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:28:25.864494 1894281 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 15:28:25.864510 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetState
	I0414 15:28:25.865781 1894281 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 15:28:25.865794 1894281 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 15:28:25.865805 1894281 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 15:28:25.865814 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:25.867966 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:25.868469 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:25.868499 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:25.868664 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:25.868854 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:25.869026 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:25.869189 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:25.869361 1894281 main.go:141] libmachine: Using SSH client type: native
	I0414 15:28:25.869588 1894281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:28:25.869601 1894281 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 15:28:25.974023 1894281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:28:25.974055 1894281 main.go:141] libmachine: Detecting the provisioner...
	I0414 15:28:25.974066 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:25.978022 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:25.978480 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:25.978535 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:25.978714 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:25.978931 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:25.979134 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:25.979304 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:25.979484 1894281 main.go:141] libmachine: Using SSH client type: native
	I0414 15:28:25.979686 1894281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:28:25.979696 1894281 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 15:28:26.087321 1894281 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 15:28:26.087474 1894281 main.go:141] libmachine: found compatible host: buildroot
	I0414 15:28:26.087491 1894281 main.go:141] libmachine: Provisioning with buildroot...
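Provisioner detection boils down to reading /etc/os-release on the guest and matching the ID field ("buildroot" here) against the provisioners libmachine knows about. A minimal local sketch of that parsing step, with the file content copied from the log output above (the map-based parser is an assumption, not libmachine's actual types):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=VALUE pairs from /etc/os-release style content.
func parseOSRelease(content string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields
}

func main() {
	// Content copied from the `cat /etc/os-release` output above.
	const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

	f := parseOSRelease(osRelease)
	if f["ID"] == "buildroot" {
		fmt.Println("found compatible host:", f["ID"])
	} else {
		fmt.Println("unsupported provisioner:", f["ID"])
	}
}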
	I0414 15:28:26.087503 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetMachineName
	I0414 15:28:26.087787 1894281 buildroot.go:166] provisioning hostname "old-k8s-version-529869"
	I0414 15:28:26.087814 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetMachineName
	I0414 15:28:26.087998 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:26.090895 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:26.091282 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:26.091306 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:26.091487 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:26.091694 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:26.091851 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:26.091976 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:26.092132 1894281 main.go:141] libmachine: Using SSH client type: native
	I0414 15:28:26.092350 1894281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:28:26.092362 1894281 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-529869 && echo "old-k8s-version-529869" | sudo tee /etc/hostname
	I0414 15:28:26.213378 1894281 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-529869
	
	I0414 15:28:26.213409 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:26.216798 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:26.217259 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:26.217312 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:26.217515 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:26.217797 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:26.217998 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:26.218154 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:26.218356 1894281 main.go:141] libmachine: Using SSH client type: native
	I0414 15:28:26.218666 1894281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:28:26.218699 1894281 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-529869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-529869/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-529869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:28:26.336628 1894281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:28:26.336662 1894281 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:28:26.336721 1894281 buildroot.go:174] setting up certificates
	I0414 15:28:26.336739 1894281 provision.go:84] configureAuth start
	I0414 15:28:26.336764 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetMachineName
	I0414 15:28:26.337120 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetIP
	I0414 15:28:26.340047 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:26.340471 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:26.340515 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:26.340651 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:26.343080 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:26.343419 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:26.343447 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:26.343573 1894281 provision.go:143] copyHostCerts
	I0414 15:28:26.343647 1894281 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:28:26.343675 1894281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:28:26.343748 1894281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:28:26.343883 1894281 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:28:26.343894 1894281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:28:26.343930 1894281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:28:26.344022 1894281 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:28:26.344032 1894281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:28:26.344061 1894281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:28:26.344144 1894281 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-529869 san=[127.0.0.1 192.168.50.117 localhost minikube old-k8s-version-529869]
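configureAuth copies the CA and client material under .minikube and then issues a server certificate whose SANs cover every name the host might be reached by (127.0.0.1, the VM IP, localhost, minikube and the profile name, as listed above). A compressed sketch of issuing such a SAN certificate with Go's crypto/x509, self-signed here for brevity instead of signed by the minikube CA, purely for illustration:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-529869"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-529869"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.117")},
	}
	// Self-signed for brevity; minikube signs the server cert with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}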
	I0414 15:28:27.039226 1894281 provision.go:177] copyRemoteCerts
	I0414 15:28:27.039296 1894281 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:28:27.039323 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:27.041921 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.042195 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:27.042237 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.042409 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:27.042639 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:27.042828 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:27.043012 1894281 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa Username:docker}
	I0414 15:28:27.129439 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:28:27.157940 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 15:28:27.185372 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:28:27.212107 1894281 provision.go:87] duration metric: took 875.339015ms to configureAuth
	I0414 15:28:27.212139 1894281 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:28:27.212358 1894281 config.go:182] Loaded profile config "old-k8s-version-529869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 15:28:27.212453 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:27.215624 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.215990 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:27.216023 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.216202 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:27.216411 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:27.216613 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:27.216795 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:27.217001 1894281 main.go:141] libmachine: Using SSH client type: native
	I0414 15:28:27.217296 1894281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:28:27.217329 1894281 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:28:27.454649 1894281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:28:27.454680 1894281 main.go:141] libmachine: Checking connection to Docker...
	I0414 15:28:27.454692 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetURL
	I0414 15:28:27.456072 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | using libvirt version 6000000
	I0414 15:28:27.458149 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.458482 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:27.458516 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.458693 1894281 main.go:141] libmachine: Docker is up and running!
	I0414 15:28:27.458708 1894281 main.go:141] libmachine: Reticulating splines...
	I0414 15:28:27.458715 1894281 client.go:171] duration metric: took 27.086920596s to LocalClient.Create
	I0414 15:28:27.458740 1894281 start.go:167] duration metric: took 27.086999739s to libmachine.API.Create "old-k8s-version-529869"
	I0414 15:28:27.458749 1894281 start.go:293] postStartSetup for "old-k8s-version-529869" (driver="kvm2")
	I0414 15:28:27.458759 1894281 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:28:27.458778 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:28:27.459061 1894281 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:28:27.459096 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:27.460975 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.461271 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:27.461299 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.461434 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:27.461634 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:27.461815 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:27.461990 1894281 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa Username:docker}
	I0414 15:28:27.545819 1894281 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:28:27.550741 1894281 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:28:27.550770 1894281 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:28:27.550853 1894281 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:28:27.550938 1894281 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:28:27.551071 1894281 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:28:27.561516 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:28:27.588219 1894281 start.go:296] duration metric: took 129.454006ms for postStartSetup
	I0414 15:28:27.588307 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetConfigRaw
	I0414 15:28:27.588965 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetIP
	I0414 15:28:27.591721 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.592157 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:27.592205 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.592435 1894281 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/config.json ...
	I0414 15:28:27.592695 1894281 start.go:128] duration metric: took 27.244434388s to createHost
	I0414 15:28:27.592739 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:27.594929 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.595307 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:27.595333 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.595524 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:27.595726 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:27.595935 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:27.596162 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:27.596338 1894281 main.go:141] libmachine: Using SSH client type: native
	I0414 15:28:27.596569 1894281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:28:27.596579 1894281 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:28:27.703451 1894281 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744644507.680014345
	
	I0414 15:28:27.703486 1894281 fix.go:216] guest clock: 1744644507.680014345
	I0414 15:28:27.703493 1894281 fix.go:229] Guest: 2025-04-14 15:28:27.680014345 +0000 UTC Remote: 2025-04-14 15:28:27.592710209 +0000 UTC m=+30.037434291 (delta=87.304136ms)
	I0414 15:28:27.703515 1894281 fix.go:200] guest clock delta is within tolerance: 87.304136ms
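The fix step reads the guest clock over SSH with `date +%s.%N`, compares it to the host's reference time and only resyncs if the skew exceeds a tolerance; here the 87ms delta is accepted. A small sketch of that comparison, with both timestamps hard-coded from the log lines above and an assumed 2-second tolerance (not necessarily minikube's exact threshold):

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N` in the log above.
	const guestSeconds = 1744644507.680014345
	guest := time.Unix(0, int64(guestSeconds*float64(time.Second)))

	// Host/remote reference time from the same log line.
	host := time.Date(2025, 4, 14, 15, 28, 27, 592710209, time.UTC)

	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}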
	I0414 15:28:27.703520 1894281 start.go:83] releasing machines lock for "old-k8s-version-529869", held for 27.355533829s
	I0414 15:28:27.703550 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:28:27.703943 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetIP
	I0414 15:28:27.707052 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.707398 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:27.707429 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.707591 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:28:27.708157 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:28:27.708349 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:28:27.708487 1894281 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:28:27.708530 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:27.708593 1894281 ssh_runner.go:195] Run: cat /version.json
	I0414 15:28:27.708622 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:28:27.711653 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.711858 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.712047 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:27.712078 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.712196 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:27.712317 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:27.712347 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:27.712404 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:27.712524 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:28:27.712597 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:27.712706 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:28:27.712807 1894281 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa Username:docker}
	I0414 15:28:27.712867 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:28:27.712993 1894281 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa Username:docker}
	I0414 15:28:27.792516 1894281 ssh_runner.go:195] Run: systemctl --version
	I0414 15:28:27.820755 1894281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:28:27.986338 1894281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:28:27.992913 1894281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:28:27.993002 1894281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:28:28.010942 1894281 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:28:28.010974 1894281 start.go:495] detecting cgroup driver to use...
	I0414 15:28:28.011050 1894281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:28:28.029857 1894281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:28:28.048770 1894281 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:28:28.048858 1894281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:28:28.064381 1894281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:28:28.079955 1894281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:28:28.205055 1894281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:28:28.370985 1894281 docker.go:233] disabling docker service ...
	I0414 15:28:28.371076 1894281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:28:28.387122 1894281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:28:28.403547 1894281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:28:28.564342 1894281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:28:28.709786 1894281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:28:28.732631 1894281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:28:28.754326 1894281 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 15:28:28.754426 1894281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:28:28.766174 1894281 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:28:28.766320 1894281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:28:28.778180 1894281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:28:28.790832 1894281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:28:28.804576 1894281 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:28:28.817754 1894281 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:28:28.829400 1894281 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:28:28.829476 1894281 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:28:28.846270 1894281 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
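Editor's note: the sysctl probe above fails only because br_netfilter is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding. The same recovery done by hand, as a short sketch (module name, sysctl key, and the echo match the log exactly):

    # load the bridge netfilter module, re-check the sysctl the probe wanted, enable forwarding
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables          # resolvable once the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward    # matches the echo in the log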
	I0414 15:28:28.858449 1894281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:28:28.995659 1894281 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 15:28:29.112492 1894281 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:28:29.112585 1894281 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:28:29.118562 1894281 start.go:563] Will wait 60s for crictl version
	I0414 15:28:29.118662 1894281 ssh_runner.go:195] Run: which crictl
	I0414 15:28:29.123309 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:28:29.166550 1894281 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:28:29.166651 1894281 ssh_runner.go:195] Run: crio --version
	I0414 15:28:29.203635 1894281 ssh_runner.go:195] Run: crio --version
	I0414 15:28:29.237214 1894281 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 15:28:29.238637 1894281 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetIP
	I0414 15:28:29.241722 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:29.242155 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:28:15 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:28:29.242183 1894281 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:28:29.242477 1894281 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 15:28:29.247255 1894281 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:28:29.261658 1894281 kubeadm.go:883] updating cluster {Name:old-k8s-version-529869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-
version-529869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:28:29.261824 1894281 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 15:28:29.261869 1894281 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:28:29.303620 1894281 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 15:28:29.303709 1894281 ssh_runner.go:195] Run: which lz4
	I0414 15:28:29.308512 1894281 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:28:29.313802 1894281 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:28:29.313848 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 15:28:31.204777 1894281 crio.go:462] duration metric: took 1.896295137s to copy over tarball
	I0414 15:28:31.204878 1894281 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 15:28:34.026092 1894281 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.821182231s)
	I0414 15:28:34.026123 1894281 crio.go:469] duration metric: took 2.821303518s to extract the tarball
	I0414 15:28:34.026134 1894281 ssh_runner.go:146] rm: /preloaded.tar.lz4
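Editor's note: after the scp of the preloaded image tarball, the test extracts it under /var with tar piped through lz4 and removes the archive. A rough equivalent of that sequence, kept as an illustration only (paths and flags are taken from the log; the final listing mirrors the `crictl images` call that follows):

    # extract the lz4-compressed preload tarball under /var, drop it, list what CRI-O now sees
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json | head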
	I0414 15:28:34.070485 1894281 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:28:34.136081 1894281 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 15:28:34.136112 1894281 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 15:28:34.136229 1894281 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:28:34.136324 1894281 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:28:34.136232 1894281 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:28:34.136712 1894281 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:28:34.136245 1894281 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 15:28:34.136245 1894281 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 15:28:34.136263 1894281 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 15:28:34.136263 1894281 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:28:34.139489 1894281 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:28:34.139524 1894281 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:28:34.139538 1894281 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:28:34.139621 1894281 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 15:28:34.139705 1894281 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 15:28:34.139621 1894281 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 15:28:34.139874 1894281 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:28:34.139980 1894281 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:28:34.281522 1894281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:28:34.283708 1894281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 15:28:34.284697 1894281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:28:34.288600 1894281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:28:34.292351 1894281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 15:28:34.292705 1894281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:28:34.300498 1894281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 15:28:34.465285 1894281 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 15:28:34.465355 1894281 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:28:34.465419 1894281 ssh_runner.go:195] Run: which crictl
	I0414 15:28:34.472406 1894281 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 15:28:34.472488 1894281 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 15:28:34.472547 1894281 ssh_runner.go:195] Run: which crictl
	I0414 15:28:34.492328 1894281 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 15:28:34.492382 1894281 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:28:34.492441 1894281 ssh_runner.go:195] Run: which crictl
	I0414 15:28:34.496654 1894281 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 15:28:34.496702 1894281 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:28:34.496715 1894281 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 15:28:34.496753 1894281 ssh_runner.go:195] Run: which crictl
	I0414 15:28:34.496768 1894281 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 15:28:34.496771 1894281 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 15:28:34.496812 1894281 ssh_runner.go:195] Run: which crictl
	I0414 15:28:34.496822 1894281 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:28:34.496894 1894281 ssh_runner.go:195] Run: which crictl
	I0414 15:28:34.506400 1894281 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 15:28:34.506449 1894281 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 15:28:34.506455 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:28:34.506478 1894281 ssh_runner.go:195] Run: which crictl
	I0414 15:28:34.506524 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 15:28:34.506557 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:28:34.509507 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 15:28:34.509506 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:28:34.510285 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:28:34.643863 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:28:34.643923 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 15:28:34.643987 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:28:34.644031 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 15:28:34.648036 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 15:28:34.648111 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:28:34.648202 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:28:34.800014 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:28:34.800133 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 15:28:34.800161 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:28:34.800240 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 15:28:34.813315 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 15:28:34.825949 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:28:34.825979 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:28:34.917704 1894281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 15:28:34.997520 1894281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 15:28:34.997546 1894281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 15:28:34.997642 1894281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 15:28:34.997701 1894281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 15:28:34.998124 1894281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 15:28:35.034393 1894281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 15:28:35.034542 1894281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 15:28:35.097456 1894281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:28:35.246086 1894281 cache_images.go:92] duration metric: took 1.109950315s to LoadCachedImages
	W0414 15:28:35.246201 1894281 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0414 15:28:35.246220 1894281 kubeadm.go:934] updating node { 192.168.50.117 8443 v1.20.0 crio true true} ...
	I0414 15:28:35.246345 1894281 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-529869 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-529869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
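Editor's note: the kubelet flags above are installed as a systemd drop-in (10-kubeadm.conf, per the scp a few lines below) rather than by editing the main unit. A hedged sketch of doing the same by hand; the drop-in path and the reload/start steps match the log, while the heredoc layout itself is only illustrative:

    # write the kubelet drop-in shown above, then pick it up and start the service
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet \
      --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
      --config=/var/lib/kubelet/config.yaml \
      --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock \
      --hostname-override=old-k8s-version-529869 \
      --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.117

    [Install]
    EOF
    sudo systemctl daemon-reload && sudo systemctl start kubelet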
	I0414 15:28:35.246448 1894281 ssh_runner.go:195] Run: crio config
	I0414 15:28:35.296639 1894281 cni.go:84] Creating CNI manager for ""
	I0414 15:28:35.296668 1894281 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:28:35.296680 1894281 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 15:28:35.296702 1894281 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.117 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-529869 NodeName:old-k8s-version-529869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 15:28:35.296879 1894281 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-529869"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
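Editor's note: the four-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml and handed to kubeadm further down. A minimal sketch of that hand-off, using the same binary path and config path the test uses (the ignore-preflight list is abbreviated here; the full list appears in the Start: line below):

    # run kubeadm against the rendered config with minikube's bundled binaries on PATH
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem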
	
	I0414 15:28:35.296962 1894281 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 15:28:35.309161 1894281 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 15:28:35.309280 1894281 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 15:28:35.323267 1894281 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0414 15:28:35.345127 1894281 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 15:28:35.367459 1894281 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0414 15:28:35.390232 1894281 ssh_runner.go:195] Run: grep 192.168.50.117	control-plane.minikube.internal$ /etc/hosts
	I0414 15:28:35.395539 1894281 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.117	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
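Editor's note: the one-liner above is an idempotent /etc/hosts update, the same pattern used earlier for host.minikube.internal. Spelled out as a sketch (the temporary file name is a placeholder; the log uses /tmp/h.$$):

    # drop any stale control-plane entry, append the current IP, then install the new file
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.50.117\tcontrol-plane.minikube.internal\n'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts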
	I0414 15:28:35.414011 1894281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:28:35.573098 1894281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:28:35.598084 1894281 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869 for IP: 192.168.50.117
	I0414 15:28:35.598118 1894281 certs.go:194] generating shared ca certs ...
	I0414 15:28:35.598142 1894281 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:28:35.598340 1894281 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 15:28:35.598440 1894281 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 15:28:35.598457 1894281 certs.go:256] generating profile certs ...
	I0414 15:28:35.598540 1894281 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/client.key
	I0414 15:28:35.598562 1894281 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/client.crt with IP's: []
	I0414 15:28:35.663351 1894281 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/client.crt ...
	I0414 15:28:35.663391 1894281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/client.crt: {Name:mk0705c54c63714f5df412cdf2e934b1df412659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:28:35.663634 1894281 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/client.key ...
	I0414 15:28:35.663653 1894281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/client.key: {Name:mkf853f43a281c0985dc7bdae46d879cc2692cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:28:35.663794 1894281 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.key.35259b7c
	I0414 15:28:35.663829 1894281 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.crt.35259b7c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.117]
	I0414 15:28:35.874995 1894281 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.crt.35259b7c ...
	I0414 15:28:35.875032 1894281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.crt.35259b7c: {Name:mkb0dfbad072bfd4f3afb0d5e18ea44b0cc975c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:28:35.875234 1894281 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.key.35259b7c ...
	I0414 15:28:35.875255 1894281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.key.35259b7c: {Name:mk5cb3ffbf7b5224612f9926928f46909b5e6e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:28:35.875359 1894281 certs.go:381] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.crt.35259b7c -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.crt
	I0414 15:28:35.875488 1894281 certs.go:385] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.key.35259b7c -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.key
	I0414 15:28:35.875585 1894281 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.key
	I0414 15:28:35.875609 1894281 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.crt with IP's: []
	I0414 15:28:35.960839 1894281 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.crt ...
	I0414 15:28:35.960890 1894281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.crt: {Name:mkee5e0db8c4dd4b3c741430f766f2de811cbcd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:28:35.961094 1894281 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.key ...
	I0414 15:28:35.961114 1894281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.key: {Name:mkb015db91e0179ae66322e333ab039fc7e8bf73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
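Editor's note: certs.go generates the profile's client, apiserver, and aggregator proxy-client key pairs in Go. Purely to illustrate what an equivalent apiserver cert with the same IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.117, per the log) looks like, a hedged openssl sketch; openssl is not what minikube actually uses here, the CA paths are shortened, and the subject CN is a placeholder:

    # illustrative only: issue an apiserver cert with the same IP SANs, signed by the profile CA
    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
      -subj "/CN=minikube-apiserver" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
      -out apiserver.crt \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.50.117")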
	I0414 15:28:35.961291 1894281 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem (1338 bytes)
	W0414 15:28:35.961329 1894281 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270_empty.pem, impossibly tiny 0 bytes
	I0414 15:28:35.961339 1894281 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 15:28:35.961364 1894281 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 15:28:35.961387 1894281 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 15:28:35.961409 1894281 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 15:28:35.961446 1894281 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:28:35.962016 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 15:28:35.993351 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:28:36.023693 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:28:36.059033 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:28:36.087065 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 15:28:36.123921 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 15:28:36.155239 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:28:36.185043 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 15:28:36.212150 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:28:36.241136 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:28:36.270505 1894281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:28:36.299306 1894281 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:28:36.319201 1894281 ssh_runner.go:195] Run: openssl version
	I0414 15:28:36.327068 1894281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:28:36.340735 1894281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:28:36.346354 1894281 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:28:36.346456 1894281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:28:36.353143 1894281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:28:36.366294 1894281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:28:36.379632 1894281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:28:36.385041 1894281 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:28:36.385132 1894281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:28:36.392327 1894281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 15:28:36.405498 1894281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:28:36.419399 1894281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:28:36.425197 1894281 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:28:36.425287 1894281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:28:36.431952 1894281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
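Editor's note: the `openssl x509 -hash` calls above compute the subject-name hash that OpenSSL uses to look up CAs, and the `ln -fs` commands create the matching <hash>.0 symlinks in /etc/ssl/certs (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A small by-hand check of one of those links:

    # the hash printed here should match the symlink name created above (minus the ".0" suffix)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0    # expected to point back at minikubeCA.pem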
	I0414 15:28:36.444997 1894281 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:28:36.450589 1894281 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 15:28:36.450672 1894281 kubeadm.go:392] StartCluster: {Name:old-k8s-version-529869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-ver
sion-529869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:28:36.450779 1894281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:28:36.450858 1894281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:28:36.509940 1894281 cri.go:89] found id: ""
	I0414 15:28:36.510038 1894281 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 15:28:36.522002 1894281 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:28:36.539299 1894281 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:28:36.551027 1894281 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:28:36.551063 1894281 kubeadm.go:157] found existing configuration files:
	
	I0414 15:28:36.551139 1894281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:28:36.561734 1894281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:28:36.561826 1894281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:28:36.576917 1894281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:28:36.592228 1894281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:28:36.592311 1894281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:28:36.604740 1894281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:28:36.614810 1894281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:28:36.614901 1894281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:28:36.625654 1894281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:28:36.636503 1894281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:28:36.636582 1894281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
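Editor's note: the four grep/rm pairs above amount to one stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed (here all four are simply missing, hence the rm -f calls are no-ops). Condensed into a loop for clarity, using only the commands the log itself runs:

    # remove any kubeconfig that does not point at the expected control-plane endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done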
	I0414 15:28:36.647360 1894281 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:28:36.972129 1894281 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:30:34.157217 1894281 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 15:30:34.157439 1894281 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 15:30:34.159997 1894281 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 15:30:34.160063 1894281 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:30:34.160151 1894281 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:30:34.160290 1894281 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:30:34.160395 1894281 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 15:30:34.160487 1894281 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:30:34.163062 1894281 out.go:235]   - Generating certificates and keys ...
	I0414 15:30:34.163152 1894281 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:30:34.163229 1894281 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:30:34.163308 1894281 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 15:30:34.163383 1894281 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 15:30:34.163462 1894281 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 15:30:34.163556 1894281 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 15:30:34.163637 1894281 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 15:30:34.163773 1894281 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-529869] and IPs [192.168.50.117 127.0.0.1 ::1]
	I0414 15:30:34.163858 1894281 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 15:30:34.163990 1894281 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-529869] and IPs [192.168.50.117 127.0.0.1 ::1]
	I0414 15:30:34.164084 1894281 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 15:30:34.164169 1894281 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 15:30:34.164245 1894281 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 15:30:34.164364 1894281 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:30:34.164438 1894281 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:30:34.164483 1894281 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:30:34.164555 1894281 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:30:34.164619 1894281 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:30:34.164762 1894281 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:30:34.164895 1894281 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:30:34.164944 1894281 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:30:34.165031 1894281 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:30:34.166677 1894281 out.go:235]   - Booting up control plane ...
	I0414 15:30:34.166796 1894281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:30:34.166895 1894281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:30:34.166986 1894281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:30:34.167075 1894281 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:30:34.167256 1894281 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 15:30:34.167308 1894281 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 15:30:34.167414 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:30:34.167611 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:30:34.167685 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:30:34.167904 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:30:34.167972 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:30:34.168117 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:30:34.168184 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:30:34.168363 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:30:34.168463 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:30:34.168645 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:30:34.168659 1894281 kubeadm.go:310] 
	I0414 15:30:34.168724 1894281 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 15:30:34.168780 1894281 kubeadm.go:310] 		timed out waiting for the condition
	I0414 15:30:34.168797 1894281 kubeadm.go:310] 
	I0414 15:30:34.168843 1894281 kubeadm.go:310] 	This error is likely caused by:
	I0414 15:30:34.168878 1894281 kubeadm.go:310] 		- The kubelet is not running
	I0414 15:30:34.169012 1894281 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 15:30:34.169021 1894281 kubeadm.go:310] 
	I0414 15:30:34.169162 1894281 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 15:30:34.169205 1894281 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 15:30:34.169264 1894281 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 15:30:34.169274 1894281 kubeadm.go:310] 
	I0414 15:30:34.169443 1894281 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 15:30:34.169549 1894281 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 15:30:34.169561 1894281 kubeadm.go:310] 
	I0414 15:30:34.169817 1894281 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 15:30:34.169992 1894281 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 15:30:34.170112 1894281 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 15:30:34.170223 1894281 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 15:30:34.170288 1894281 kubeadm.go:310] 
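Editor's note: kubeadm gives up here because the kubelet health endpoint on 127.0.0.1:10248 never answers within the 4m0s wait. The hints it prints above are the usual starting point for triage on the node; condensed into a sketch that uses only the commands kubeadm itself suggests, plus standard journalctl filtering:

    # on the node: is the kubelet running, and what does it log?
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50
    # did any control-plane container start and crash under CRI-O?
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>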
	W0414 15:30:34.170426 1894281 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-529869] and IPs [192.168.50.117 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-529869] and IPs [192.168.50.117 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-529869] and IPs [192.168.50.117 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-529869] and IPs [192.168.50.117 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 15:30:34.170474 1894281 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 15:30:35.271657 1894281 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.101152607s)
	I0414 15:30:35.271754 1894281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:30:35.287441 1894281 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:30:35.300257 1894281 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:30:35.300285 1894281 kubeadm.go:157] found existing configuration files:
	
	I0414 15:30:35.300333 1894281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:30:35.312518 1894281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:30:35.312592 1894281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:30:35.325047 1894281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:30:35.335363 1894281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:30:35.335449 1894281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:30:35.346557 1894281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:30:35.357110 1894281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:30:35.357179 1894281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:30:35.368232 1894281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:30:35.378984 1894281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:30:35.379075 1894281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:30:35.390662 1894281 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:30:35.479014 1894281 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 15:30:35.479101 1894281 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:30:35.640561 1894281 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:30:35.640728 1894281 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:30:35.640885 1894281 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 15:30:35.853776 1894281 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:30:35.855833 1894281 out.go:235]   - Generating certificates and keys ...
	I0414 15:30:35.855951 1894281 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:30:35.856056 1894281 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:30:35.856179 1894281 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 15:30:35.856300 1894281 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 15:30:35.856397 1894281 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 15:30:35.856475 1894281 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 15:30:35.856576 1894281 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 15:30:35.856671 1894281 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 15:30:35.856769 1894281 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 15:30:35.856879 1894281 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 15:30:35.856948 1894281 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 15:30:35.857031 1894281 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:30:35.957994 1894281 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:30:36.245236 1894281 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:30:36.321552 1894281 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:30:36.441440 1894281 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:30:36.460818 1894281 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:30:36.463242 1894281 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:30:36.463313 1894281 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:30:36.605866 1894281 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:30:36.608002 1894281 out.go:235]   - Booting up control plane ...
	I0414 15:30:36.608143 1894281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:30:36.613221 1894281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:30:36.614270 1894281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:30:36.615265 1894281 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:30:36.617522 1894281 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 15:31:16.621289 1894281 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 15:31:16.622186 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:31:16.622502 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:31:21.623539 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:31:21.623728 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:31:31.624589 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:31:31.624844 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:31:51.623379 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:31:51.623710 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:32:31.623937 1894281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:32:31.624178 1894281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:32:31.624204 1894281 kubeadm.go:310] 
	I0414 15:32:31.624284 1894281 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 15:32:31.624361 1894281 kubeadm.go:310] 		timed out waiting for the condition
	I0414 15:32:31.624371 1894281 kubeadm.go:310] 
	I0414 15:32:31.624421 1894281 kubeadm.go:310] 	This error is likely caused by:
	I0414 15:32:31.624472 1894281 kubeadm.go:310] 		- The kubelet is not running
	I0414 15:32:31.624637 1894281 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 15:32:31.624652 1894281 kubeadm.go:310] 
	I0414 15:32:31.624775 1894281 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 15:32:31.624824 1894281 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 15:32:31.624871 1894281 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 15:32:31.624881 1894281 kubeadm.go:310] 
	I0414 15:32:31.624994 1894281 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 15:32:31.625104 1894281 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 15:32:31.625116 1894281 kubeadm.go:310] 
	I0414 15:32:31.625306 1894281 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 15:32:31.625443 1894281 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 15:32:31.625552 1894281 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 15:32:31.625667 1894281 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 15:32:31.625681 1894281 kubeadm.go:310] 
	I0414 15:32:31.626398 1894281 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:32:31.626520 1894281 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 15:32:31.626616 1894281 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 15:32:31.626691 1894281 kubeadm.go:394] duration metric: took 3m55.176028745s to StartCluster
	I0414 15:32:31.626732 1894281 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:32:31.626872 1894281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:32:31.676142 1894281 cri.go:89] found id: ""
	I0414 15:32:31.676174 1894281 logs.go:282] 0 containers: []
	W0414 15:32:31.676182 1894281 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:32:31.676190 1894281 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:32:31.676263 1894281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:32:31.721931 1894281 cri.go:89] found id: ""
	I0414 15:32:31.721972 1894281 logs.go:282] 0 containers: []
	W0414 15:32:31.721984 1894281 logs.go:284] No container was found matching "etcd"
	I0414 15:32:31.721993 1894281 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:32:31.722074 1894281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:32:31.772448 1894281 cri.go:89] found id: ""
	I0414 15:32:31.772482 1894281 logs.go:282] 0 containers: []
	W0414 15:32:31.772493 1894281 logs.go:284] No container was found matching "coredns"
	I0414 15:32:31.772502 1894281 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:32:31.772578 1894281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:32:31.814106 1894281 cri.go:89] found id: ""
	I0414 15:32:31.814142 1894281 logs.go:282] 0 containers: []
	W0414 15:32:31.814152 1894281 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:32:31.814161 1894281 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:32:31.814232 1894281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:32:31.861119 1894281 cri.go:89] found id: ""
	I0414 15:32:31.861146 1894281 logs.go:282] 0 containers: []
	W0414 15:32:31.861154 1894281 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:32:31.861167 1894281 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:32:31.861222 1894281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:32:31.906262 1894281 cri.go:89] found id: ""
	I0414 15:32:31.906298 1894281 logs.go:282] 0 containers: []
	W0414 15:32:31.906309 1894281 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:32:31.906319 1894281 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:32:31.906410 1894281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:32:31.942875 1894281 cri.go:89] found id: ""
	I0414 15:32:31.942908 1894281 logs.go:282] 0 containers: []
	W0414 15:32:31.942919 1894281 logs.go:284] No container was found matching "kindnet"
	I0414 15:32:31.942933 1894281 logs.go:123] Gathering logs for kubelet ...
	I0414 15:32:31.942978 1894281 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:32:31.997809 1894281 logs.go:123] Gathering logs for dmesg ...
	I0414 15:32:31.997856 1894281 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:32:32.013500 1894281 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:32:32.013535 1894281 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:32:32.151306 1894281 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:32:32.151339 1894281 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:32:32.151360 1894281 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:32:32.269548 1894281 logs.go:123] Gathering logs for container status ...
	I0414 15:32:32.269597 1894281 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 15:32:32.317958 1894281 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 15:32:32.318034 1894281 out.go:270] * 
	* 
	W0414 15:32:32.318113 1894281 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:32:32.318138 1894281 out.go:270] * 
	* 
	W0414 15:32:32.319143 1894281 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 15:32:32.322986 1894281 out.go:201] 
	W0414 15:32:32.324360 1894281 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:32:32.324419 1894281 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 15:32:32.324446 1894281 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 15:32:32.325963 1894281 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-529869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 6 (250.56397ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 15:32:32.626992 1897345 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-529869" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-529869" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (275.10s)
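The kubeadm output above repeats the same diagnosis on every retry: the kubelet on the old-k8s-version-529869 node never answered on localhost:10248, so the control plane never came up and kubeadm timed out in wait-control-plane. A minimal troubleshooting sketch, assuming only the profile name, the cri-o socket path, and the cgroup-driver suggestion that the log itself prints (the full set of minikube start flags from this run is abbreviated here):

	# Check the kubelet on the minikube VM (commands taken from the log's own advice)
	minikube -p old-k8s-version-529869 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-529869 ssh "sudo journalctl -xeu kubelet"
	# See whether cri-o started any control-plane containers at all
	minikube -p old-k8s-version-529869 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry with the cgroup-driver hint minikube prints for issue #4172
	minikube start -p old-k8s-version-529869 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd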

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-529869 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-529869 create -f testdata/busybox.yaml: exit status 1 (53.942589ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-529869" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-529869 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 6 (247.605281ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 15:32:32.931741 1897386 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-529869" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-529869" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 6 (239.927133ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 15:32:33.172256 1897415 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-529869" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-529869" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.54s)
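The DeployApp failure is a follow-on of the same missing context rather than a problem with testdata/busybox.yaml. A hedged sketch of retrying the deploy only once the context is registered (profile name taken from this run; the guard itself is illustrative):

	# deploy only when the context actually exists in the kubeconfig
	if kubectl config get-contexts -o name | grep -qx old-k8s-version-529869; then
		kubectl --context old-k8s-version-529869 create -f testdata/busybox.yaml
	else
		echo "context old-k8s-version-529869 not found; run: minikube update-context -p old-k8s-version-529869"
	fi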

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-529869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-529869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m52.821899457s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-529869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-529869 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-529869 describe deploy/metrics-server -n kube-system: exit status 1 (50.462843ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-529869" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-529869 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 6 (238.66062ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 15:34:26.283876 1898299 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-529869" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-529869" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.11s)
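The addon enable fails inside the VM because the apiserver behind localhost:8443 refuses connections, not because of the metrics-server manifests themselves. A minimal sketch of gating a retry on a healthy control plane, with the profile name and image/registry overrides copied from the test invocation above (the wait loop is illustrative, not part of minikube):

	# wait until minikube reports the apiserver as Running, then retry the addon
	until minikube status -p old-k8s-version-529869 --format='{{.APIServer}}' | grep -q Running; do
		sleep 10
	done
	minikube addons enable metrics-server -p old-k8s-version-529869 \
		--images=MetricsServer=registry.k8s.io/echoserver:1.4 \
		--registries=MetricsServer=fake.domain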

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (512.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-529869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0414 15:34:31.482545 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-529869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m30.290706165s)

                                                
                                                
-- stdout --
	* [old-k8s-version-529869] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-529869" primary control-plane node in "old-k8s-version-529869" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-529869" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 15:34:29.868421 1898413 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:34:29.868612 1898413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:34:29.868627 1898413 out.go:358] Setting ErrFile to fd 2...
	I0414 15:34:29.868634 1898413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:34:29.868939 1898413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:34:29.869778 1898413 out.go:352] Setting JSON to false
	I0414 15:34:29.871114 1898413 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":40614,"bootTime":1744604256,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:34:29.871182 1898413 start.go:139] virtualization: kvm guest
	I0414 15:34:29.873431 1898413 out.go:177] * [old-k8s-version-529869] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:34:29.875011 1898413 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:34:29.875036 1898413 notify.go:220] Checking for updates...
	I0414 15:34:29.877306 1898413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:34:29.878524 1898413 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:34:29.879878 1898413 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:34:29.881661 1898413 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:34:29.883932 1898413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:34:29.886076 1898413 config.go:182] Loaded profile config "old-k8s-version-529869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 15:34:29.886569 1898413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:34:29.886655 1898413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:34:29.904399 1898413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44835
	I0414 15:34:29.905120 1898413 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:34:29.905815 1898413 main.go:141] libmachine: Using API Version  1
	I0414 15:34:29.905846 1898413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:34:29.906268 1898413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:34:29.906481 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:34:29.908506 1898413 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0414 15:34:29.909803 1898413 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:34:29.910284 1898413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:34:29.910344 1898413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:34:29.926354 1898413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0414 15:34:29.926898 1898413 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:34:29.927357 1898413 main.go:141] libmachine: Using API Version  1
	I0414 15:34:29.927376 1898413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:34:29.927716 1898413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:34:29.927920 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:34:29.968255 1898413 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 15:34:29.969709 1898413 start.go:297] selected driver: kvm2
	I0414 15:34:29.969731 1898413 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-529869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 Clust
erName:old-k8s-version-529869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:34:29.969862 1898413 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:34:29.970664 1898413 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:34:29.970757 1898413 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 15:34:29.987800 1898413 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 15:34:29.988301 1898413 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:34:29.988351 1898413 cni.go:84] Creating CNI manager for ""
	I0414 15:34:29.988394 1898413 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:34:29.988428 1898413 start.go:340] cluster config:
	{Name:old-k8s-version-529869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-529869 Namespace:default APIServerHAVIP: APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:34:29.988581 1898413 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:34:29.990452 1898413 out.go:177] * Starting "old-k8s-version-529869" primary control-plane node in "old-k8s-version-529869" cluster
	I0414 15:34:29.991849 1898413 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 15:34:29.991908 1898413 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 15:34:29.991959 1898413 cache.go:56] Caching tarball of preloaded images
	I0414 15:34:29.992073 1898413 preload.go:172] Found /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 15:34:29.992090 1898413 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0414 15:34:29.992224 1898413 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/config.json ...
	I0414 15:34:29.992426 1898413 start.go:360] acquireMachinesLock for old-k8s-version-529869: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:34:29.992494 1898413 start.go:364] duration metric: took 49.128µs to acquireMachinesLock for "old-k8s-version-529869"
	I0414 15:34:29.992509 1898413 start.go:96] Skipping create...Using existing machine configuration
	I0414 15:34:29.992518 1898413 fix.go:54] fixHost starting: 
	I0414 15:34:29.992782 1898413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:34:29.992814 1898413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:34:30.009754 1898413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43463
	I0414 15:34:30.010264 1898413 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:34:30.010815 1898413 main.go:141] libmachine: Using API Version  1
	I0414 15:34:30.010856 1898413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:34:30.011262 1898413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:34:30.011468 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:34:30.011629 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetState
	I0414 15:34:30.013551 1898413 fix.go:112] recreateIfNeeded on old-k8s-version-529869: state=Stopped err=<nil>
	I0414 15:34:30.013588 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	W0414 15:34:30.013758 1898413 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 15:34:30.015705 1898413 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-529869" ...
	I0414 15:34:30.016776 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .Start
	I0414 15:34:30.016984 1898413 main.go:141] libmachine: (old-k8s-version-529869) starting domain...
	I0414 15:34:30.017006 1898413 main.go:141] libmachine: (old-k8s-version-529869) ensuring networks are active...
	I0414 15:34:30.017818 1898413 main.go:141] libmachine: (old-k8s-version-529869) Ensuring network default is active
	I0414 15:34:30.018280 1898413 main.go:141] libmachine: (old-k8s-version-529869) Ensuring network mk-old-k8s-version-529869 is active
	I0414 15:34:30.018659 1898413 main.go:141] libmachine: (old-k8s-version-529869) getting domain XML...
	I0414 15:34:30.019432 1898413 main.go:141] libmachine: (old-k8s-version-529869) creating domain...
	I0414 15:34:30.407182 1898413 main.go:141] libmachine: (old-k8s-version-529869) waiting for IP...
	I0414 15:34:30.408404 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:30.408890 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:30.409021 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:30.408890 1898447 retry.go:31] will retry after 201.744897ms: waiting for domain to come up
	I0414 15:34:30.612787 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:30.613450 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:30.613479 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:30.613425 1898447 retry.go:31] will retry after 357.705112ms: waiting for domain to come up
	I0414 15:34:30.973257 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:30.973851 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:30.973881 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:30.973817 1898447 retry.go:31] will retry after 446.461617ms: waiting for domain to come up
	I0414 15:34:31.422474 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:31.423112 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:31.423195 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:31.423106 1898447 retry.go:31] will retry after 437.486291ms: waiting for domain to come up
	I0414 15:34:31.862622 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:31.863246 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:31.863275 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:31.863210 1898447 retry.go:31] will retry after 707.706048ms: waiting for domain to come up
	I0414 15:34:32.572530 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:32.572976 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:32.573000 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:32.572953 1898447 retry.go:31] will retry after 875.847061ms: waiting for domain to come up
	I0414 15:34:33.450089 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:33.450591 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:33.450616 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:33.450552 1898447 retry.go:31] will retry after 907.015454ms: waiting for domain to come up
	I0414 15:34:34.359980 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:34.360595 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:34.360625 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:34.360536 1898447 retry.go:31] will retry after 1.449929892s: waiting for domain to come up
	I0414 15:34:35.812705 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:35.813242 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:35.813275 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:35.813211 1898447 retry.go:31] will retry after 1.26593141s: waiting for domain to come up
	I0414 15:34:37.080694 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:37.081234 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:37.081258 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:37.081198 1898447 retry.go:31] will retry after 2.182239345s: waiting for domain to come up
	I0414 15:34:39.265047 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:39.265702 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:39.265729 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:39.265662 1898447 retry.go:31] will retry after 2.159411469s: waiting for domain to come up
	I0414 15:34:41.427607 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:41.428237 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:41.428266 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:41.428198 1898447 retry.go:31] will retry after 3.051452563s: waiting for domain to come up
	I0414 15:34:44.482260 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:44.482870 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | unable to find current IP address of domain old-k8s-version-529869 in network mk-old-k8s-version-529869
	I0414 15:34:44.482899 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | I0414 15:34:44.482818 1898447 retry.go:31] will retry after 3.765949712s: waiting for domain to come up
	I0414 15:34:48.252325 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.252899 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has current primary IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.252912 1898413 main.go:141] libmachine: (old-k8s-version-529869) found domain IP: 192.168.50.117
	I0414 15:34:48.252920 1898413 main.go:141] libmachine: (old-k8s-version-529869) reserving static IP address...
	I0414 15:34:48.253396 1898413 main.go:141] libmachine: (old-k8s-version-529869) reserved static IP address 192.168.50.117 for domain old-k8s-version-529869
	I0414 15:34:48.253419 1898413 main.go:141] libmachine: (old-k8s-version-529869) waiting for SSH...
	I0414 15:34:48.253441 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "old-k8s-version-529869", mac: "52:54:00:20:2e:7f", ip: "192.168.50.117"} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:48.253475 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | skip adding static IP to network mk-old-k8s-version-529869 - found existing host DHCP lease matching {name: "old-k8s-version-529869", mac: "52:54:00:20:2e:7f", ip: "192.168.50.117"}
	I0414 15:34:48.253495 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | Getting to WaitForSSH function...
	I0414 15:34:48.255796 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.256171 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:48.256198 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.256365 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | Using SSH client type: external
	I0414 15:34:48.256388 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa (-rw-------)
	I0414 15:34:48.256414 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.117 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:34:48.256419 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | About to run SSH command:
	I0414 15:34:48.256428 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | exit 0
	I0414 15:34:48.387056 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | SSH cmd err, output: <nil>: 
	I0414 15:34:48.387488 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetConfigRaw
	I0414 15:34:48.388247 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetIP
	I0414 15:34:48.391356 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.391772 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:48.391806 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.392131 1898413 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/config.json ...
	I0414 15:34:48.392358 1898413 machine.go:93] provisionDockerMachine start ...
	I0414 15:34:48.392387 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:34:48.392633 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:48.395042 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.395396 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:48.395445 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.395567 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:34:48.395834 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:48.396104 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:48.396276 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:34:48.396433 1898413 main.go:141] libmachine: Using SSH client type: native
	I0414 15:34:48.396761 1898413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:34:48.396784 1898413 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 15:34:48.511724 1898413 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 15:34:48.511765 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetMachineName
	I0414 15:34:48.512022 1898413 buildroot.go:166] provisioning hostname "old-k8s-version-529869"
	I0414 15:34:48.512047 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetMachineName
	I0414 15:34:48.512241 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:48.514901 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.515235 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:48.515260 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.515432 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:34:48.515625 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:48.515768 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:48.515891 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:34:48.516028 1898413 main.go:141] libmachine: Using SSH client type: native
	I0414 15:34:48.516321 1898413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:34:48.516335 1898413 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-529869 && echo "old-k8s-version-529869" | sudo tee /etc/hostname
	I0414 15:34:48.647518 1898413 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-529869
	
	I0414 15:34:48.647559 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:48.650674 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.651148 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:48.651184 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.651376 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:34:48.651589 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:48.651791 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:48.651977 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:34:48.652177 1898413 main.go:141] libmachine: Using SSH client type: native
	I0414 15:34:48.652432 1898413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:34:48.652449 1898413 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-529869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-529869/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-529869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:34:48.777801 1898413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:34:48.777846 1898413 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:34:48.777891 1898413 buildroot.go:174] setting up certificates
	I0414 15:34:48.777906 1898413 provision.go:84] configureAuth start
	I0414 15:34:48.777919 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetMachineName
	I0414 15:34:48.778251 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetIP
	I0414 15:34:48.781505 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.781886 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:48.781927 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.782078 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:48.785054 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.785400 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:48.785461 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:48.785580 1898413 provision.go:143] copyHostCerts
	I0414 15:34:48.785657 1898413 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:34:48.785673 1898413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:34:48.785742 1898413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:34:48.785892 1898413 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:34:48.785907 1898413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:34:48.785945 1898413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:34:48.786074 1898413 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:34:48.786087 1898413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:34:48.786118 1898413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:34:48.786210 1898413 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-529869 san=[127.0.0.1 192.168.50.117 localhost minikube old-k8s-version-529869]
	I0414 15:34:49.159887 1898413 provision.go:177] copyRemoteCerts
	I0414 15:34:49.159955 1898413 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:34:49.159982 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:49.163206 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.163684 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:49.163729 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.163985 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:34:49.164198 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:49.164354 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:34:49.164464 1898413 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa Username:docker}
	I0414 15:34:49.257754 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:34:49.291994 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0414 15:34:49.321712 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 15:34:49.349981 1898413 provision.go:87] duration metric: took 572.062053ms to configureAuth
	I0414 15:34:49.350007 1898413 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:34:49.350231 1898413 config.go:182] Loaded profile config "old-k8s-version-529869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 15:34:49.350337 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:49.353433 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.353823 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:49.353847 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.354066 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:34:49.354278 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:49.354453 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:49.354621 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:34:49.354807 1898413 main.go:141] libmachine: Using SSH client type: native
	I0414 15:34:49.355074 1898413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:34:49.355093 1898413 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:34:49.600623 1898413 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:34:49.600680 1898413 machine.go:96] duration metric: took 1.208286201s to provisionDockerMachine
	I0414 15:34:49.600694 1898413 start.go:293] postStartSetup for "old-k8s-version-529869" (driver="kvm2")
	I0414 15:34:49.600708 1898413 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:34:49.600824 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:34:49.601198 1898413 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:34:49.601224 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:49.604116 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.604480 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:49.604513 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.604660 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:34:49.604877 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:49.605055 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:34:49.605220 1898413 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa Username:docker}
	I0414 15:34:49.694028 1898413 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:34:49.698890 1898413 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:34:49.698929 1898413 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:34:49.699010 1898413 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:34:49.699101 1898413 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:34:49.699206 1898413 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:34:49.711135 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:34:49.738217 1898413 start.go:296] duration metric: took 137.504308ms for postStartSetup
	I0414 15:34:49.738266 1898413 fix.go:56] duration metric: took 19.74574832s for fixHost
	I0414 15:34:49.738311 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:49.740859 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.741210 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:49.741248 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.741400 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:34:49.741601 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:49.741765 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:49.741880 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:34:49.742009 1898413 main.go:141] libmachine: Using SSH client type: native
	I0414 15:34:49.742221 1898413 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.50.117 22 <nil> <nil>}
	I0414 15:34:49.742231 1898413 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:34:49.860410 1898413 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744644889.830675025
	
	I0414 15:34:49.860437 1898413 fix.go:216] guest clock: 1744644889.830675025
	I0414 15:34:49.860445 1898413 fix.go:229] Guest: 2025-04-14 15:34:49.830675025 +0000 UTC Remote: 2025-04-14 15:34:49.738270071 +0000 UTC m=+19.921289683 (delta=92.404954ms)
	I0414 15:34:49.860469 1898413 fix.go:200] guest clock delta is within tolerance: 92.404954ms
	I0414 15:34:49.860476 1898413 start.go:83] releasing machines lock for "old-k8s-version-529869", held for 19.86797202s
	I0414 15:34:49.860515 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:34:49.860852 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetIP
	I0414 15:34:49.863767 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.864230 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:49.864273 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.864452 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:34:49.865049 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:34:49.865238 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .DriverName
	I0414 15:34:49.865356 1898413 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:34:49.865412 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:49.865477 1898413 ssh_runner.go:195] Run: cat /version.json
	I0414 15:34:49.865526 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHHostname
	I0414 15:34:49.868480 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.868700 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.868910 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:49.868936 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.869046 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:34:49.869187 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:49.869261 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:49.869277 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:49.869458 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHPort
	I0414 15:34:49.869462 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:34:49.869594 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHKeyPath
	I0414 15:34:49.869681 1898413 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa Username:docker}
	I0414 15:34:49.869704 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetSSHUsername
	I0414 15:34:49.869848 1898413 sshutil.go:53] new ssh client: &{IP:192.168.50.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/old-k8s-version-529869/id_rsa Username:docker}
	I0414 15:34:49.952028 1898413 ssh_runner.go:195] Run: systemctl --version
	I0414 15:34:49.979082 1898413 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:34:50.132609 1898413 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:34:50.140466 1898413 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:34:50.140590 1898413 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:34:50.159688 1898413 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:34:50.159715 1898413 start.go:495] detecting cgroup driver to use...
	I0414 15:34:50.159793 1898413 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:34:50.179127 1898413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:34:50.194053 1898413 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:34:50.194112 1898413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:34:50.212041 1898413 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:34:50.229955 1898413 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:34:50.359996 1898413 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:34:50.545033 1898413 docker.go:233] disabling docker service ...
	I0414 15:34:50.545143 1898413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:34:50.562094 1898413 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:34:50.576971 1898413 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:34:50.722954 1898413 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:34:50.883748 1898413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:34:50.899847 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:34:50.921182 1898413 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0414 15:34:50.921254 1898413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:34:50.934905 1898413 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:34:50.934989 1898413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:34:50.947033 1898413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:34:50.958671 1898413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:34:50.970425 1898413 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:34:50.983858 1898413 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:34:50.995423 1898413 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:34:50.995496 1898413 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:34:51.012135 1898413 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 15:34:51.024786 1898413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:34:51.160914 1898413 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 15:34:51.263262 1898413 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:34:51.263353 1898413 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:34:51.268530 1898413 start.go:563] Will wait 60s for crictl version
	I0414 15:34:51.268595 1898413 ssh_runner.go:195] Run: which crictl
	I0414 15:34:51.273220 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:34:51.317845 1898413 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:34:51.317959 1898413 ssh_runner.go:195] Run: crio --version
	I0414 15:34:51.350092 1898413 ssh_runner.go:195] Run: crio --version
	I0414 15:34:51.385334 1898413 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0414 15:34:51.386567 1898413 main.go:141] libmachine: (old-k8s-version-529869) Calling .GetIP
	I0414 15:34:51.390075 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:51.390544 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:2e:7f", ip: ""} in network mk-old-k8s-version-529869: {Iface:virbr2 ExpiryTime:2025-04-14 16:34:42 +0000 UTC Type:0 Mac:52:54:00:20:2e:7f Iaid: IPaddr:192.168.50.117 Prefix:24 Hostname:old-k8s-version-529869 Clientid:01:52:54:00:20:2e:7f}
	I0414 15:34:51.390581 1898413 main.go:141] libmachine: (old-k8s-version-529869) DBG | domain old-k8s-version-529869 has defined IP address 192.168.50.117 and MAC address 52:54:00:20:2e:7f in network mk-old-k8s-version-529869
	I0414 15:34:51.390939 1898413 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0414 15:34:51.398583 1898413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:34:51.412912 1898413 kubeadm.go:883] updating cluster {Name:old-k8s-version-529869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-
version-529869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:34:51.413025 1898413 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 15:34:51.413079 1898413 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:34:51.462693 1898413 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 15:34:51.462760 1898413 ssh_runner.go:195] Run: which lz4
	I0414 15:34:51.466990 1898413 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:34:51.471854 1898413 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:34:51.471890 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0414 15:34:53.303162 1898413 crio.go:462] duration metric: took 1.836209348s to copy over tarball
	I0414 15:34:53.303242 1898413 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 15:34:56.538742 1898413 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.235467799s)
	I0414 15:34:56.538789 1898413 crio.go:469] duration metric: took 3.23558497s to extract the tarball
	I0414 15:34:56.538800 1898413 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 15:34:56.584449 1898413 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:34:56.629535 1898413 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0414 15:34:56.629575 1898413 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0414 15:34:56.629683 1898413 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:34:56.629743 1898413 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:34:56.629763 1898413 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0414 15:34:56.629797 1898413 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:34:56.629684 1898413 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:34:56.629745 1898413 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0414 15:34:56.630086 1898413 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:34:56.630084 1898413 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0414 15:34:56.632281 1898413 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:34:56.632307 1898413 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:34:56.632293 1898413 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0414 15:34:56.632380 1898413 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:34:56.632350 1898413 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:34:56.632404 1898413 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:34:56.632755 1898413 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0414 15:34:56.632858 1898413 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0414 15:34:56.772982 1898413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0414 15:34:56.786755 1898413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0414 15:34:56.789036 1898413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:34:56.789501 1898413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:34:56.789698 1898413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:34:56.791792 1898413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:34:56.810765 1898413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0414 15:34:56.891333 1898413 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0414 15:34:56.891394 1898413 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0414 15:34:56.891441 1898413 ssh_runner.go:195] Run: which crictl
	I0414 15:34:56.987942 1898413 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0414 15:34:56.988001 1898413 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0414 15:34:56.988057 1898413 ssh_runner.go:195] Run: which crictl
	I0414 15:34:56.999170 1898413 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0414 15:34:56.999237 1898413 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:34:56.999298 1898413 ssh_runner.go:195] Run: which crictl
	I0414 15:34:57.016043 1898413 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0414 15:34:57.016096 1898413 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0414 15:34:57.016107 1898413 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:34:57.016132 1898413 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:34:57.016153 1898413 ssh_runner.go:195] Run: which crictl
	I0414 15:34:57.016178 1898413 ssh_runner.go:195] Run: which crictl
	I0414 15:34:57.018408 1898413 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0414 15:34:57.018455 1898413 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:34:57.018492 1898413 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0414 15:34:57.018535 1898413 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0414 15:34:57.018578 1898413 ssh_runner.go:195] Run: which crictl
	I0414 15:34:57.018636 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 15:34:57.018692 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:34:57.018508 1898413 ssh_runner.go:195] Run: which crictl
	I0414 15:34:57.018777 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 15:34:57.026849 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:34:57.026876 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:34:57.146303 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 15:34:57.146330 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 15:34:57.146355 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:34:57.146411 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:34:57.155136 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 15:34:57.155164 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:34:57.159652 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:34:57.334808 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 15:34:57.334872 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0414 15:34:57.334939 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:34:57.334941 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0414 15:34:57.335026 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0414 15:34:57.335041 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0414 15:34:57.347852 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0414 15:34:57.467035 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0414 15:34:57.513010 1898413 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0414 15:34:57.513045 1898413 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0414 15:34:57.513165 1898413 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0414 15:34:57.513209 1898413 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0414 15:34:57.513235 1898413 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0414 15:34:57.513275 1898413 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0414 15:34:57.536077 1898413 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0414 15:34:57.565403 1898413 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0414 15:34:57.598790 1898413 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:34:57.744562 1898413 cache_images.go:92] duration metric: took 1.114960534s to LoadCachedImages
	W0414 15:34:57.744702 1898413 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0414 15:34:57.744722 1898413 kubeadm.go:934] updating node { 192.168.50.117 8443 v1.20.0 crio true true} ...
	I0414 15:34:57.744853 1898413 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-529869 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.117
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-529869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0414 15:34:57.744948 1898413 ssh_runner.go:195] Run: crio config
	I0414 15:34:57.801154 1898413 cni.go:84] Creating CNI manager for ""
	I0414 15:34:57.801181 1898413 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:34:57.801192 1898413 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 15:34:57.801210 1898413 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.117 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-529869 NodeName:old-k8s-version-529869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.117"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.117 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0414 15:34:57.801342 1898413 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.117
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-529869"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.117
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.117"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 15:34:57.801409 1898413 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0414 15:34:57.812260 1898413 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 15:34:57.812352 1898413 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 15:34:57.822630 1898413 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0414 15:34:57.841200 1898413 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 15:34:57.861772 1898413 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0414 15:34:57.881847 1898413 ssh_runner.go:195] Run: grep 192.168.50.117	control-plane.minikube.internal$ /etc/hosts
	I0414 15:34:57.885970 1898413 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.117	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:34:57.899446 1898413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:34:58.038317 1898413 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:34:58.058328 1898413 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869 for IP: 192.168.50.117
	I0414 15:34:58.058351 1898413 certs.go:194] generating shared ca certs ...
	I0414 15:34:58.058381 1898413 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:34:58.058576 1898413 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 15:34:58.058646 1898413 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 15:34:58.058660 1898413 certs.go:256] generating profile certs ...
	I0414 15:34:58.058763 1898413 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/client.key
	I0414 15:34:58.058813 1898413 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.key.35259b7c
	I0414 15:34:58.058852 1898413 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.key
	I0414 15:34:58.058962 1898413 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem (1338 bytes)
	W0414 15:34:58.058992 1898413 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270_empty.pem, impossibly tiny 0 bytes
	I0414 15:34:58.058999 1898413 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 15:34:58.059022 1898413 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 15:34:58.059044 1898413 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 15:34:58.059065 1898413 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 15:34:58.059101 1898413 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:34:58.059672 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 15:34:58.116212 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:34:58.148461 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:34:58.187603 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:34:58.220584 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0414 15:34:58.253839 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 15:34:58.284510 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:34:58.316208 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/old-k8s-version-529869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 15:34:58.357080 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:34:58.386640 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:34:58.418666 1898413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:34:58.449436 1898413 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:34:58.472042 1898413 ssh_runner.go:195] Run: openssl version
	I0414 15:34:58.480218 1898413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:34:58.492993 1898413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:34:58.503301 1898413 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:34:58.503395 1898413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:34:58.510955 1898413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 15:34:58.523979 1898413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:34:58.536865 1898413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:34:58.543333 1898413 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:34:58.543399 1898413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:34:58.551420 1898413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 15:34:58.565817 1898413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:34:58.578627 1898413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:34:58.584514 1898413 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:34:58.584606 1898413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:34:58.591706 1898413 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:34:58.604419 1898413 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:34:58.609552 1898413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0414 15:34:58.616081 1898413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0414 15:34:58.622741 1898413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0414 15:34:58.630122 1898413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0414 15:34:58.636706 1898413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0414 15:34:58.643589 1898413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0414 15:34:58.650599 1898413 kubeadm.go:392] StartCluster: {Name:old-k8s-version-529869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-ver
sion-529869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.117 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:34:58.650694 1898413 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:34:58.650760 1898413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:34:58.692054 1898413 cri.go:89] found id: ""
	I0414 15:34:58.692152 1898413 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 15:34:58.703760 1898413 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0414 15:34:58.703780 1898413 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0414 15:34:58.703828 1898413 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0414 15:34:58.715331 1898413 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0414 15:34:58.716061 1898413 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-529869" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:34:58.716418 1898413 kubeconfig.go:62] /home/jenkins/minikube-integration/20512-1845971/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-529869" cluster setting kubeconfig missing "old-k8s-version-529869" context setting]
	I0414 15:34:58.717043 1898413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/kubeconfig: {Name:mk700cb2cf46a87df11c1873f52c26c76c14915e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:34:58.718540 1898413 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0414 15:34:58.730287 1898413 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.117
	I0414 15:34:58.730327 1898413 kubeadm.go:1160] stopping kube-system containers ...
	I0414 15:34:58.730342 1898413 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0414 15:34:58.730427 1898413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:34:58.774421 1898413 cri.go:89] found id: ""
	I0414 15:34:58.774518 1898413 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0414 15:34:58.792371 1898413 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:34:58.804423 1898413 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:34:58.804452 1898413 kubeadm.go:157] found existing configuration files:
	
	I0414 15:34:58.804502 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:34:58.815337 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:34:58.815417 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:34:58.826569 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:34:58.836890 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:34:58.836971 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:34:58.850103 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:34:58.862436 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:34:58.862506 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:34:58.873415 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:34:58.884029 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:34:58.884129 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:34:58.895045 1898413 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:34:58.906024 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:34:59.153216 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:34:59.872487 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:35:00.108025 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:35:00.253709 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0414 15:35:00.362281 1898413 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:35:00.362424 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:00.862522 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:01.362723 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:01.863420 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:02.362518 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:02.862478 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:03.363425 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:03.862609 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:04.362522 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:04.863460 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:05.363494 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:05.862561 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:06.363265 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:06.862870 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:07.362670 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:07.863469 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:08.363327 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:08.863427 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:09.362784 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:09.862968 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:10.362697 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:10.863119 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:11.363512 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:11.862904 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:12.362534 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:12.863383 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:13.362553 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:13.862982 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:14.363059 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:14.863430 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:15.363308 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:15.863306 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:16.362874 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:16.862553 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:17.362591 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:17.862505 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:18.362559 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:18.863387 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:19.363075 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:19.862516 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:20.362520 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:20.862488 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:21.363207 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:21.863401 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:22.363218 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:22.863158 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:23.363318 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:23.862519 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:24.362572 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:24.863120 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:25.362556 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:25.862538 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:26.362940 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:26.863344 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:27.362550 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:27.862675 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:28.363397 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:28.863385 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:29.362528 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:29.863432 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:30.363087 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:30.862985 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:31.362899 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:31.862531 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:32.362526 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:32.862594 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:33.363142 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:33.863375 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:34.363138 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:34.862541 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:35.362592 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:35.862540 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:36.363503 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:36.862564 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:37.362742 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:37.862569 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:38.363226 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:38.863352 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:39.363320 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:39.863058 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:40.362516 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:40.863341 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:41.363468 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:41.863285 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:42.363027 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:42.862950 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:43.362531 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:43.863126 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:44.363426 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:44.863526 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:45.362499 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:45.863093 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:46.363185 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:46.863065 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:47.362530 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:47.862990 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:48.362594 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:48.862978 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:49.363211 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:49.862553 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:50.363399 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:50.862530 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:51.363226 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:51.863320 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:52.363252 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:52.862568 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:53.362566 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:53.862487 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:54.362581 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:54.863494 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:55.362998 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:55.862544 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:56.363364 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:56.863407 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:57.362507 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:57.863381 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:58.363407 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:58.862757 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:59.363321 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:35:59.862556 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:00.363050 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:00.363153 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:00.413461 1898413 cri.go:89] found id: ""
	I0414 15:36:00.413497 1898413 logs.go:282] 0 containers: []
	W0414 15:36:00.413508 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:00.413518 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:00.413611 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:00.453858 1898413 cri.go:89] found id: ""
	I0414 15:36:00.453890 1898413 logs.go:282] 0 containers: []
	W0414 15:36:00.453903 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:00.453912 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:00.453983 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:00.493873 1898413 cri.go:89] found id: ""
	I0414 15:36:00.493903 1898413 logs.go:282] 0 containers: []
	W0414 15:36:00.493914 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:00.493922 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:00.493992 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:00.539392 1898413 cri.go:89] found id: ""
	I0414 15:36:00.539426 1898413 logs.go:282] 0 containers: []
	W0414 15:36:00.539438 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:00.539446 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:00.539518 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:00.581392 1898413 cri.go:89] found id: ""
	I0414 15:36:00.581420 1898413 logs.go:282] 0 containers: []
	W0414 15:36:00.581429 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:00.581435 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:00.581501 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:00.619057 1898413 cri.go:89] found id: ""
	I0414 15:36:00.619091 1898413 logs.go:282] 0 containers: []
	W0414 15:36:00.619100 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:00.619107 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:00.619163 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:00.659884 1898413 cri.go:89] found id: ""
	I0414 15:36:00.659918 1898413 logs.go:282] 0 containers: []
	W0414 15:36:00.659930 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:00.659937 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:00.660008 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:00.699068 1898413 cri.go:89] found id: ""
	I0414 15:36:00.699112 1898413 logs.go:282] 0 containers: []
	W0414 15:36:00.699124 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:00.699138 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:00.699154 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:00.845255 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:00.845286 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:00.845303 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:00.924021 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:00.924067 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:00.971289 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:00.971330 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:01.028828 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:01.028876 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:03.546552 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:03.566942 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:03.567021 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:03.605118 1898413 cri.go:89] found id: ""
	I0414 15:36:03.605156 1898413 logs.go:282] 0 containers: []
	W0414 15:36:03.605167 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:03.605175 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:03.605250 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:03.644396 1898413 cri.go:89] found id: ""
	I0414 15:36:03.644431 1898413 logs.go:282] 0 containers: []
	W0414 15:36:03.644440 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:03.644446 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:03.644505 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:03.685476 1898413 cri.go:89] found id: ""
	I0414 15:36:03.685507 1898413 logs.go:282] 0 containers: []
	W0414 15:36:03.685515 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:03.685521 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:03.685593 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:03.723780 1898413 cri.go:89] found id: ""
	I0414 15:36:03.723811 1898413 logs.go:282] 0 containers: []
	W0414 15:36:03.723820 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:03.723827 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:03.723886 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:03.761028 1898413 cri.go:89] found id: ""
	I0414 15:36:03.761057 1898413 logs.go:282] 0 containers: []
	W0414 15:36:03.761066 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:03.761072 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:03.761125 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:03.805453 1898413 cri.go:89] found id: ""
	I0414 15:36:03.805492 1898413 logs.go:282] 0 containers: []
	W0414 15:36:03.805504 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:03.805513 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:03.805587 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:03.843598 1898413 cri.go:89] found id: ""
	I0414 15:36:03.843638 1898413 logs.go:282] 0 containers: []
	W0414 15:36:03.843651 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:03.843659 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:03.843726 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:03.883779 1898413 cri.go:89] found id: ""
	I0414 15:36:03.883806 1898413 logs.go:282] 0 containers: []
	W0414 15:36:03.883814 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:03.883824 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:03.883842 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:03.938419 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:03.938466 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:03.952687 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:03.952721 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:04.035681 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:04.035707 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:04.035725 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:04.115805 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:04.115854 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:06.662587 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:06.677043 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:06.677110 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:06.716947 1898413 cri.go:89] found id: ""
	I0414 15:36:06.716984 1898413 logs.go:282] 0 containers: []
	W0414 15:36:06.716994 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:06.717002 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:06.717074 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:06.754653 1898413 cri.go:89] found id: ""
	I0414 15:36:06.754690 1898413 logs.go:282] 0 containers: []
	W0414 15:36:06.754701 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:06.754707 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:06.754778 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:06.791211 1898413 cri.go:89] found id: ""
	I0414 15:36:06.791241 1898413 logs.go:282] 0 containers: []
	W0414 15:36:06.791249 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:06.791255 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:06.791315 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:06.826604 1898413 cri.go:89] found id: ""
	I0414 15:36:06.826641 1898413 logs.go:282] 0 containers: []
	W0414 15:36:06.826651 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:06.826658 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:06.826730 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:06.867300 1898413 cri.go:89] found id: ""
	I0414 15:36:06.867326 1898413 logs.go:282] 0 containers: []
	W0414 15:36:06.867335 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:06.867341 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:06.867394 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:06.911336 1898413 cri.go:89] found id: ""
	I0414 15:36:06.911375 1898413 logs.go:282] 0 containers: []
	W0414 15:36:06.911387 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:06.911395 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:06.911482 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:06.947046 1898413 cri.go:89] found id: ""
	I0414 15:36:06.947079 1898413 logs.go:282] 0 containers: []
	W0414 15:36:06.947088 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:06.947094 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:06.947149 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:06.983289 1898413 cri.go:89] found id: ""
	I0414 15:36:06.983331 1898413 logs.go:282] 0 containers: []
	W0414 15:36:06.983342 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:06.983356 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:06.983372 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:07.070118 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:07.070166 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:07.116336 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:07.116383 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:07.176995 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:07.177043 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:07.192007 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:07.192044 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:07.276416 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:09.777151 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:09.790669 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:09.790742 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:09.827659 1898413 cri.go:89] found id: ""
	I0414 15:36:09.827687 1898413 logs.go:282] 0 containers: []
	W0414 15:36:09.827696 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:09.827702 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:09.827766 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:09.865151 1898413 cri.go:89] found id: ""
	I0414 15:36:09.865188 1898413 logs.go:282] 0 containers: []
	W0414 15:36:09.865197 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:09.865205 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:09.865261 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:09.905549 1898413 cri.go:89] found id: ""
	I0414 15:36:09.905585 1898413 logs.go:282] 0 containers: []
	W0414 15:36:09.905597 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:09.905606 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:09.905684 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:09.951768 1898413 cri.go:89] found id: ""
	I0414 15:36:09.951805 1898413 logs.go:282] 0 containers: []
	W0414 15:36:09.951817 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:09.951825 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:09.951897 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:09.992565 1898413 cri.go:89] found id: ""
	I0414 15:36:09.992599 1898413 logs.go:282] 0 containers: []
	W0414 15:36:09.992619 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:09.992626 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:09.992694 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:10.030615 1898413 cri.go:89] found id: ""
	I0414 15:36:10.030646 1898413 logs.go:282] 0 containers: []
	W0414 15:36:10.030659 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:10.030667 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:10.030742 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:10.069055 1898413 cri.go:89] found id: ""
	I0414 15:36:10.069088 1898413 logs.go:282] 0 containers: []
	W0414 15:36:10.069097 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:10.069104 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:10.069162 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:10.109328 1898413 cri.go:89] found id: ""
	I0414 15:36:10.109354 1898413 logs.go:282] 0 containers: []
	W0414 15:36:10.109362 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:10.109373 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:10.109384 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:10.165793 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:10.165854 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:10.181756 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:10.181802 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:10.264888 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:10.264912 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:10.264931 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:10.344222 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:10.344270 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:12.894264 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:12.908160 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:12.908229 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:12.942227 1898413 cri.go:89] found id: ""
	I0414 15:36:12.942262 1898413 logs.go:282] 0 containers: []
	W0414 15:36:12.942271 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:12.942278 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:12.942334 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:12.982087 1898413 cri.go:89] found id: ""
	I0414 15:36:12.982119 1898413 logs.go:282] 0 containers: []
	W0414 15:36:12.982128 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:12.982133 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:12.982187 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:13.018510 1898413 cri.go:89] found id: ""
	I0414 15:36:13.018547 1898413 logs.go:282] 0 containers: []
	W0414 15:36:13.018559 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:13.018568 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:13.018648 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:13.054432 1898413 cri.go:89] found id: ""
	I0414 15:36:13.054465 1898413 logs.go:282] 0 containers: []
	W0414 15:36:13.054476 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:13.054493 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:13.054571 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:13.093690 1898413 cri.go:89] found id: ""
	I0414 15:36:13.093726 1898413 logs.go:282] 0 containers: []
	W0414 15:36:13.093737 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:13.093745 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:13.093811 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:13.132698 1898413 cri.go:89] found id: ""
	I0414 15:36:13.132737 1898413 logs.go:282] 0 containers: []
	W0414 15:36:13.132748 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:13.132757 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:13.132865 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:13.171078 1898413 cri.go:89] found id: ""
	I0414 15:36:13.171111 1898413 logs.go:282] 0 containers: []
	W0414 15:36:13.171121 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:13.171126 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:13.171201 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:13.207915 1898413 cri.go:89] found id: ""
	I0414 15:36:13.207954 1898413 logs.go:282] 0 containers: []
	W0414 15:36:13.207966 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:13.207979 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:13.207996 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:13.263260 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:13.263305 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:13.278663 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:13.278707 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:13.359617 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:13.359644 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:13.359684 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:13.442655 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:13.442710 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:15.986524 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:16.000577 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:16.000672 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:16.040313 1898413 cri.go:89] found id: ""
	I0414 15:36:16.040348 1898413 logs.go:282] 0 containers: []
	W0414 15:36:16.040357 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:16.040365 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:16.040438 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:16.079602 1898413 cri.go:89] found id: ""
	I0414 15:36:16.079629 1898413 logs.go:282] 0 containers: []
	W0414 15:36:16.079636 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:16.079642 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:16.079706 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:16.117839 1898413 cri.go:89] found id: ""
	I0414 15:36:16.117882 1898413 logs.go:282] 0 containers: []
	W0414 15:36:16.117893 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:16.117900 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:16.117955 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:16.157631 1898413 cri.go:89] found id: ""
	I0414 15:36:16.157669 1898413 logs.go:282] 0 containers: []
	W0414 15:36:16.157683 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:16.157691 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:16.157765 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:16.201193 1898413 cri.go:89] found id: ""
	I0414 15:36:16.201221 1898413 logs.go:282] 0 containers: []
	W0414 15:36:16.201229 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:16.201236 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:16.201306 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:16.239699 1898413 cri.go:89] found id: ""
	I0414 15:36:16.239728 1898413 logs.go:282] 0 containers: []
	W0414 15:36:16.239736 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:16.239742 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:16.239794 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:16.281053 1898413 cri.go:89] found id: ""
	I0414 15:36:16.281081 1898413 logs.go:282] 0 containers: []
	W0414 15:36:16.281091 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:16.281098 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:16.281167 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:16.326273 1898413 cri.go:89] found id: ""
	I0414 15:36:16.326306 1898413 logs.go:282] 0 containers: []
	W0414 15:36:16.326315 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:16.326326 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:16.326338 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:16.380455 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:16.380498 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:16.396214 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:16.396251 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:16.478189 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:16.478222 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:16.478235 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:16.556913 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:16.556951 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:19.103059 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:19.116646 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:19.116735 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:19.152975 1898413 cri.go:89] found id: ""
	I0414 15:36:19.153003 1898413 logs.go:282] 0 containers: []
	W0414 15:36:19.153012 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:19.153019 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:19.153088 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:19.189024 1898413 cri.go:89] found id: ""
	I0414 15:36:19.189061 1898413 logs.go:282] 0 containers: []
	W0414 15:36:19.189073 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:19.189081 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:19.189150 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:19.227313 1898413 cri.go:89] found id: ""
	I0414 15:36:19.227345 1898413 logs.go:282] 0 containers: []
	W0414 15:36:19.227356 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:19.227363 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:19.227436 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:19.271058 1898413 cri.go:89] found id: ""
	I0414 15:36:19.271089 1898413 logs.go:282] 0 containers: []
	W0414 15:36:19.271097 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:19.271104 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:19.271164 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:19.310059 1898413 cri.go:89] found id: ""
	I0414 15:36:19.310086 1898413 logs.go:282] 0 containers: []
	W0414 15:36:19.310094 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:19.310100 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:19.310178 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:19.347062 1898413 cri.go:89] found id: ""
	I0414 15:36:19.347089 1898413 logs.go:282] 0 containers: []
	W0414 15:36:19.347100 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:19.347108 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:19.347167 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:19.387764 1898413 cri.go:89] found id: ""
	I0414 15:36:19.387804 1898413 logs.go:282] 0 containers: []
	W0414 15:36:19.387817 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:19.387830 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:19.387893 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:19.425304 1898413 cri.go:89] found id: ""
	I0414 15:36:19.425348 1898413 logs.go:282] 0 containers: []
	W0414 15:36:19.425360 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:19.425374 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:19.425390 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:19.478255 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:19.478302 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:19.494043 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:19.494081 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:19.573747 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:19.573775 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:19.573793 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:19.654777 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:19.654827 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:22.206593 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:22.221909 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:22.221980 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:22.256678 1898413 cri.go:89] found id: ""
	I0414 15:36:22.256717 1898413 logs.go:282] 0 containers: []
	W0414 15:36:22.256730 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:22.256739 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:22.256875 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:22.300623 1898413 cri.go:89] found id: ""
	I0414 15:36:22.300657 1898413 logs.go:282] 0 containers: []
	W0414 15:36:22.300669 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:22.300677 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:22.300744 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:22.344247 1898413 cri.go:89] found id: ""
	I0414 15:36:22.344275 1898413 logs.go:282] 0 containers: []
	W0414 15:36:22.344286 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:22.344294 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:22.344358 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:22.383522 1898413 cri.go:89] found id: ""
	I0414 15:36:22.383555 1898413 logs.go:282] 0 containers: []
	W0414 15:36:22.383568 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:22.383574 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:22.383637 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:22.422336 1898413 cri.go:89] found id: ""
	I0414 15:36:22.422391 1898413 logs.go:282] 0 containers: []
	W0414 15:36:22.422405 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:22.422413 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:22.422492 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:22.458574 1898413 cri.go:89] found id: ""
	I0414 15:36:22.458602 1898413 logs.go:282] 0 containers: []
	W0414 15:36:22.458610 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:22.458616 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:22.458684 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:22.498175 1898413 cri.go:89] found id: ""
	I0414 15:36:22.498214 1898413 logs.go:282] 0 containers: []
	W0414 15:36:22.498226 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:22.498234 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:22.498305 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:22.534910 1898413 cri.go:89] found id: ""
	I0414 15:36:22.534943 1898413 logs.go:282] 0 containers: []
	W0414 15:36:22.534958 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:22.534970 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:22.534986 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:22.582470 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:22.582514 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:22.637226 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:22.637276 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:22.652650 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:22.652682 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:22.734569 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:22.734598 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:22.734613 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:25.318276 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:25.332643 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:25.332730 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:25.374326 1898413 cri.go:89] found id: ""
	I0414 15:36:25.374386 1898413 logs.go:282] 0 containers: []
	W0414 15:36:25.374399 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:25.374408 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:25.374475 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:25.411097 1898413 cri.go:89] found id: ""
	I0414 15:36:25.411127 1898413 logs.go:282] 0 containers: []
	W0414 15:36:25.411137 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:25.411143 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:25.411208 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:25.458263 1898413 cri.go:89] found id: ""
	I0414 15:36:25.458294 1898413 logs.go:282] 0 containers: []
	W0414 15:36:25.458305 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:25.458312 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:25.458409 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:25.506041 1898413 cri.go:89] found id: ""
	I0414 15:36:25.506084 1898413 logs.go:282] 0 containers: []
	W0414 15:36:25.506098 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:25.506106 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:25.506179 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:25.552746 1898413 cri.go:89] found id: ""
	I0414 15:36:25.552840 1898413 logs.go:282] 0 containers: []
	W0414 15:36:25.552865 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:25.552883 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:25.552955 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:25.604078 1898413 cri.go:89] found id: ""
	I0414 15:36:25.604107 1898413 logs.go:282] 0 containers: []
	W0414 15:36:25.604115 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:25.604122 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:25.604177 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:25.642599 1898413 cri.go:89] found id: ""
	I0414 15:36:25.642638 1898413 logs.go:282] 0 containers: []
	W0414 15:36:25.642650 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:25.642659 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:25.642724 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:25.685833 1898413 cri.go:89] found id: ""
	I0414 15:36:25.685861 1898413 logs.go:282] 0 containers: []
	W0414 15:36:25.685870 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:25.685880 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:25.685898 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:25.756100 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:25.756144 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:25.771124 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:25.771159 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:25.866440 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:25.866465 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:25.866483 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:25.953491 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:25.953538 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:28.501817 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:28.516194 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:28.516310 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:28.554341 1898413 cri.go:89] found id: ""
	I0414 15:36:28.554390 1898413 logs.go:282] 0 containers: []
	W0414 15:36:28.554403 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:28.554413 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:28.554482 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:28.595485 1898413 cri.go:89] found id: ""
	I0414 15:36:28.595521 1898413 logs.go:282] 0 containers: []
	W0414 15:36:28.595534 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:28.595542 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:28.595631 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:28.635895 1898413 cri.go:89] found id: ""
	I0414 15:36:28.635926 1898413 logs.go:282] 0 containers: []
	W0414 15:36:28.635934 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:28.635940 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:28.635994 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:28.673424 1898413 cri.go:89] found id: ""
	I0414 15:36:28.673463 1898413 logs.go:282] 0 containers: []
	W0414 15:36:28.673476 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:28.673483 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:28.673567 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:28.715094 1898413 cri.go:89] found id: ""
	I0414 15:36:28.715135 1898413 logs.go:282] 0 containers: []
	W0414 15:36:28.715147 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:28.715155 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:28.715227 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:28.756233 1898413 cri.go:89] found id: ""
	I0414 15:36:28.756263 1898413 logs.go:282] 0 containers: []
	W0414 15:36:28.756272 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:28.756279 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:28.756346 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:28.794125 1898413 cri.go:89] found id: ""
	I0414 15:36:28.794170 1898413 logs.go:282] 0 containers: []
	W0414 15:36:28.794182 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:28.794190 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:28.794257 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:28.832793 1898413 cri.go:89] found id: ""
	I0414 15:36:28.832823 1898413 logs.go:282] 0 containers: []
	W0414 15:36:28.832839 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:28.832849 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:28.832865 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:28.912170 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:28.912197 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:28.912212 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:28.993449 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:28.993496 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:29.033760 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:29.033789 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:29.088385 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:29.088433 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:31.606050 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:31.619960 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:31.620039 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:31.657276 1898413 cri.go:89] found id: ""
	I0414 15:36:31.657351 1898413 logs.go:282] 0 containers: []
	W0414 15:36:31.657366 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:31.657375 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:31.657451 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:31.695623 1898413 cri.go:89] found id: ""
	I0414 15:36:31.695667 1898413 logs.go:282] 0 containers: []
	W0414 15:36:31.695680 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:31.695688 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:31.695771 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:31.736467 1898413 cri.go:89] found id: ""
	I0414 15:36:31.736516 1898413 logs.go:282] 0 containers: []
	W0414 15:36:31.736529 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:31.736537 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:31.736625 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:31.773766 1898413 cri.go:89] found id: ""
	I0414 15:36:31.773801 1898413 logs.go:282] 0 containers: []
	W0414 15:36:31.773813 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:31.773821 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:31.773889 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:31.811891 1898413 cri.go:89] found id: ""
	I0414 15:36:31.811924 1898413 logs.go:282] 0 containers: []
	W0414 15:36:31.811937 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:31.811944 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:31.812015 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:31.850451 1898413 cri.go:89] found id: ""
	I0414 15:36:31.850491 1898413 logs.go:282] 0 containers: []
	W0414 15:36:31.850503 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:31.850514 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:31.850583 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:31.885682 1898413 cri.go:89] found id: ""
	I0414 15:36:31.885719 1898413 logs.go:282] 0 containers: []
	W0414 15:36:31.885728 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:31.885735 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:31.885793 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:31.924462 1898413 cri.go:89] found id: ""
	I0414 15:36:31.924491 1898413 logs.go:282] 0 containers: []
	W0414 15:36:31.924501 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:31.924515 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:31.924542 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:31.940464 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:31.940494 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:32.013917 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:32.013941 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:32.013958 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:32.104102 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:32.104150 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:32.152188 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:32.152231 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:34.710557 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:34.725920 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:34.726002 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:34.770192 1898413 cri.go:89] found id: ""
	I0414 15:36:34.770227 1898413 logs.go:282] 0 containers: []
	W0414 15:36:34.770239 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:34.770248 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:34.770321 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:34.805759 1898413 cri.go:89] found id: ""
	I0414 15:36:34.805795 1898413 logs.go:282] 0 containers: []
	W0414 15:36:34.805808 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:34.805816 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:34.805896 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:34.842417 1898413 cri.go:89] found id: ""
	I0414 15:36:34.842451 1898413 logs.go:282] 0 containers: []
	W0414 15:36:34.842463 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:34.842471 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:34.842531 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:34.883888 1898413 cri.go:89] found id: ""
	I0414 15:36:34.883916 1898413 logs.go:282] 0 containers: []
	W0414 15:36:34.883924 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:34.883930 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:34.883984 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:34.920967 1898413 cri.go:89] found id: ""
	I0414 15:36:34.920999 1898413 logs.go:282] 0 containers: []
	W0414 15:36:34.921007 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:34.921013 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:34.921066 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:34.969611 1898413 cri.go:89] found id: ""
	I0414 15:36:34.969651 1898413 logs.go:282] 0 containers: []
	W0414 15:36:34.969662 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:34.969669 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:34.969737 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:35.010349 1898413 cri.go:89] found id: ""
	I0414 15:36:35.010412 1898413 logs.go:282] 0 containers: []
	W0414 15:36:35.010425 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:35.010433 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:35.010506 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:35.052980 1898413 cri.go:89] found id: ""
	I0414 15:36:35.053009 1898413 logs.go:282] 0 containers: []
	W0414 15:36:35.053020 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:35.053031 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:35.053045 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:35.108431 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:35.108475 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:35.124110 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:35.124144 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:35.198411 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:35.198437 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:35.198450 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:35.277821 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:35.277867 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:37.821419 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:37.836700 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:37.836775 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:37.876584 1898413 cri.go:89] found id: ""
	I0414 15:36:37.876616 1898413 logs.go:282] 0 containers: []
	W0414 15:36:37.876628 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:37.876636 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:37.876706 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:37.915146 1898413 cri.go:89] found id: ""
	I0414 15:36:37.915174 1898413 logs.go:282] 0 containers: []
	W0414 15:36:37.915189 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:37.915195 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:37.915260 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:37.957688 1898413 cri.go:89] found id: ""
	I0414 15:36:37.957716 1898413 logs.go:282] 0 containers: []
	W0414 15:36:37.957727 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:37.957735 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:37.957792 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:38.001632 1898413 cri.go:89] found id: ""
	I0414 15:36:38.001663 1898413 logs.go:282] 0 containers: []
	W0414 15:36:38.001674 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:38.001683 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:38.001748 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:38.052722 1898413 cri.go:89] found id: ""
	I0414 15:36:38.052752 1898413 logs.go:282] 0 containers: []
	W0414 15:36:38.052764 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:38.052771 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:38.052839 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:38.103947 1898413 cri.go:89] found id: ""
	I0414 15:36:38.103983 1898413 logs.go:282] 0 containers: []
	W0414 15:36:38.103994 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:38.104004 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:38.104075 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:38.148773 1898413 cri.go:89] found id: ""
	I0414 15:36:38.148811 1898413 logs.go:282] 0 containers: []
	W0414 15:36:38.148824 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:38.148831 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:38.148909 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:38.186597 1898413 cri.go:89] found id: ""
	I0414 15:36:38.186639 1898413 logs.go:282] 0 containers: []
	W0414 15:36:38.186652 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:38.186666 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:38.186687 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:38.202083 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:38.202120 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:38.281438 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:38.281465 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:38.281478 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:38.363679 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:38.363735 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:38.407444 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:38.407484 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:40.962496 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:40.976099 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:40.976183 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:41.014193 1898413 cri.go:89] found id: ""
	I0414 15:36:41.014245 1898413 logs.go:282] 0 containers: []
	W0414 15:36:41.014254 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:41.014260 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:41.014325 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:41.050548 1898413 cri.go:89] found id: ""
	I0414 15:36:41.050589 1898413 logs.go:282] 0 containers: []
	W0414 15:36:41.050602 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:41.050610 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:41.050683 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:41.088481 1898413 cri.go:89] found id: ""
	I0414 15:36:41.088514 1898413 logs.go:282] 0 containers: []
	W0414 15:36:41.088528 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:41.088534 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:41.088606 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:41.126580 1898413 cri.go:89] found id: ""
	I0414 15:36:41.126612 1898413 logs.go:282] 0 containers: []
	W0414 15:36:41.126624 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:41.126632 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:41.126698 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:41.164490 1898413 cri.go:89] found id: ""
	I0414 15:36:41.164523 1898413 logs.go:282] 0 containers: []
	W0414 15:36:41.164535 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:41.164543 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:41.164618 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:41.200541 1898413 cri.go:89] found id: ""
	I0414 15:36:41.200582 1898413 logs.go:282] 0 containers: []
	W0414 15:36:41.200595 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:41.200603 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:41.200686 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:41.244276 1898413 cri.go:89] found id: ""
	I0414 15:36:41.244320 1898413 logs.go:282] 0 containers: []
	W0414 15:36:41.244328 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:41.244334 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:41.244403 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:41.285836 1898413 cri.go:89] found id: ""
	I0414 15:36:41.285875 1898413 logs.go:282] 0 containers: []
	W0414 15:36:41.285888 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:41.285904 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:41.285920 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:41.337714 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:41.337762 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:41.352676 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:41.352709 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:41.430249 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:41.430277 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:41.430291 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:41.508195 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:41.508240 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:44.058517 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:44.072875 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:44.072949 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:44.111484 1898413 cri.go:89] found id: ""
	I0414 15:36:44.111516 1898413 logs.go:282] 0 containers: []
	W0414 15:36:44.111524 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:44.111530 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:44.111596 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:44.147742 1898413 cri.go:89] found id: ""
	I0414 15:36:44.147791 1898413 logs.go:282] 0 containers: []
	W0414 15:36:44.147805 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:44.147814 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:44.147896 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:44.188652 1898413 cri.go:89] found id: ""
	I0414 15:36:44.188694 1898413 logs.go:282] 0 containers: []
	W0414 15:36:44.188707 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:44.188715 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:44.188793 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:44.230487 1898413 cri.go:89] found id: ""
	I0414 15:36:44.230517 1898413 logs.go:282] 0 containers: []
	W0414 15:36:44.230524 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:44.230530 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:44.230596 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:44.277149 1898413 cri.go:89] found id: ""
	I0414 15:36:44.277181 1898413 logs.go:282] 0 containers: []
	W0414 15:36:44.277192 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:44.277200 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:44.277270 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:44.315518 1898413 cri.go:89] found id: ""
	I0414 15:36:44.315549 1898413 logs.go:282] 0 containers: []
	W0414 15:36:44.315558 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:44.315565 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:44.315622 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:44.352702 1898413 cri.go:89] found id: ""
	I0414 15:36:44.352732 1898413 logs.go:282] 0 containers: []
	W0414 15:36:44.352743 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:44.352751 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:44.352811 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:44.390849 1898413 cri.go:89] found id: ""
	I0414 15:36:44.390881 1898413 logs.go:282] 0 containers: []
	W0414 15:36:44.390894 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:44.390907 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:44.390923 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:44.467286 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:44.467315 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:44.467332 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:44.554719 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:44.554760 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:44.597541 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:44.597578 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:44.653981 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:44.654027 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:47.170518 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:47.185650 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:47.185721 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:47.223720 1898413 cri.go:89] found id: ""
	I0414 15:36:47.223756 1898413 logs.go:282] 0 containers: []
	W0414 15:36:47.223765 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:47.223771 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:47.223844 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:47.262516 1898413 cri.go:89] found id: ""
	I0414 15:36:47.262553 1898413 logs.go:282] 0 containers: []
	W0414 15:36:47.262565 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:47.262574 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:47.262643 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:47.300344 1898413 cri.go:89] found id: ""
	I0414 15:36:47.300377 1898413 logs.go:282] 0 containers: []
	W0414 15:36:47.300395 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:47.300404 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:47.300482 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:47.341023 1898413 cri.go:89] found id: ""
	I0414 15:36:47.341056 1898413 logs.go:282] 0 containers: []
	W0414 15:36:47.341068 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:47.341076 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:47.341144 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:47.377549 1898413 cri.go:89] found id: ""
	I0414 15:36:47.377582 1898413 logs.go:282] 0 containers: []
	W0414 15:36:47.377593 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:47.377601 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:47.377674 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:47.415660 1898413 cri.go:89] found id: ""
	I0414 15:36:47.415688 1898413 logs.go:282] 0 containers: []
	W0414 15:36:47.415696 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:47.415703 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:47.415762 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:47.458573 1898413 cri.go:89] found id: ""
	I0414 15:36:47.458612 1898413 logs.go:282] 0 containers: []
	W0414 15:36:47.458623 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:47.458632 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:47.458705 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:47.498163 1898413 cri.go:89] found id: ""
	I0414 15:36:47.498203 1898413 logs.go:282] 0 containers: []
	W0414 15:36:47.498215 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:47.498228 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:47.498243 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:47.552232 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:47.552271 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:47.568822 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:47.568857 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:47.644201 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:47.644233 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:47.644251 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:47.732164 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:47.732204 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:50.283776 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:50.299209 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:50.299287 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:50.350020 1898413 cri.go:89] found id: ""
	I0414 15:36:50.350061 1898413 logs.go:282] 0 containers: []
	W0414 15:36:50.350074 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:50.350085 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:50.350152 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:50.391166 1898413 cri.go:89] found id: ""
	I0414 15:36:50.391204 1898413 logs.go:282] 0 containers: []
	W0414 15:36:50.391217 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:50.391225 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:50.391298 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:50.428063 1898413 cri.go:89] found id: ""
	I0414 15:36:50.428099 1898413 logs.go:282] 0 containers: []
	W0414 15:36:50.428110 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:50.428119 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:50.428185 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:50.468180 1898413 cri.go:89] found id: ""
	I0414 15:36:50.468208 1898413 logs.go:282] 0 containers: []
	W0414 15:36:50.468216 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:50.468222 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:50.468281 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:50.512024 1898413 cri.go:89] found id: ""
	I0414 15:36:50.512050 1898413 logs.go:282] 0 containers: []
	W0414 15:36:50.512069 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:50.512077 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:50.512147 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:50.551410 1898413 cri.go:89] found id: ""
	I0414 15:36:50.551443 1898413 logs.go:282] 0 containers: []
	W0414 15:36:50.551455 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:50.551463 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:50.551528 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:50.598220 1898413 cri.go:89] found id: ""
	I0414 15:36:50.598247 1898413 logs.go:282] 0 containers: []
	W0414 15:36:50.598256 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:50.598262 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:50.598318 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:50.633583 1898413 cri.go:89] found id: ""
	I0414 15:36:50.633616 1898413 logs.go:282] 0 containers: []
	W0414 15:36:50.633624 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:50.633634 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:50.633647 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:50.691939 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:50.691981 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:50.707624 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:50.707656 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:50.794814 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:50.794845 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:50.794861 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:50.884271 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:50.884314 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:53.429394 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:53.450093 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:53.450163 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:53.493058 1898413 cri.go:89] found id: ""
	I0414 15:36:53.493088 1898413 logs.go:282] 0 containers: []
	W0414 15:36:53.493100 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:53.493109 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:53.493164 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:53.530926 1898413 cri.go:89] found id: ""
	I0414 15:36:53.530965 1898413 logs.go:282] 0 containers: []
	W0414 15:36:53.530977 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:53.530985 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:53.531047 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:53.569577 1898413 cri.go:89] found id: ""
	I0414 15:36:53.569605 1898413 logs.go:282] 0 containers: []
	W0414 15:36:53.569616 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:53.569624 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:53.569682 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:53.619672 1898413 cri.go:89] found id: ""
	I0414 15:36:53.619707 1898413 logs.go:282] 0 containers: []
	W0414 15:36:53.619719 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:53.619728 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:53.619792 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:53.664503 1898413 cri.go:89] found id: ""
	I0414 15:36:53.664529 1898413 logs.go:282] 0 containers: []
	W0414 15:36:53.664536 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:53.664542 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:53.664592 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:53.708726 1898413 cri.go:89] found id: ""
	I0414 15:36:53.708758 1898413 logs.go:282] 0 containers: []
	W0414 15:36:53.708767 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:53.708773 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:53.708832 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:53.753123 1898413 cri.go:89] found id: ""
	I0414 15:36:53.753151 1898413 logs.go:282] 0 containers: []
	W0414 15:36:53.753175 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:53.753183 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:53.753247 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:53.796276 1898413 cri.go:89] found id: ""
	I0414 15:36:53.796308 1898413 logs.go:282] 0 containers: []
	W0414 15:36:53.796319 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:53.796331 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:53.796344 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:53.875983 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:53.876025 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:53.915839 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:53.915868 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:53.968844 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:53.968884 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:53.983711 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:53.983741 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:54.059040 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:56.559340 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:56.573716 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:56.573798 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:56.613038 1898413 cri.go:89] found id: ""
	I0414 15:36:56.613071 1898413 logs.go:282] 0 containers: []
	W0414 15:36:56.613079 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:56.613086 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:56.613139 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:56.660618 1898413 cri.go:89] found id: ""
	I0414 15:36:56.660661 1898413 logs.go:282] 0 containers: []
	W0414 15:36:56.660674 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:56.660683 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:56.660756 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:56.704692 1898413 cri.go:89] found id: ""
	I0414 15:36:56.704730 1898413 logs.go:282] 0 containers: []
	W0414 15:36:56.704743 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:56.704751 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:56.704823 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:56.745731 1898413 cri.go:89] found id: ""
	I0414 15:36:56.745793 1898413 logs.go:282] 0 containers: []
	W0414 15:36:56.745807 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:56.745815 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:56.745891 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:56.782449 1898413 cri.go:89] found id: ""
	I0414 15:36:56.782483 1898413 logs.go:282] 0 containers: []
	W0414 15:36:56.782495 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:56.782503 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:56.782592 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:56.829199 1898413 cri.go:89] found id: ""
	I0414 15:36:56.829241 1898413 logs.go:282] 0 containers: []
	W0414 15:36:56.829253 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:56.829261 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:56.829330 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:36:56.871954 1898413 cri.go:89] found id: ""
	I0414 15:36:56.871988 1898413 logs.go:282] 0 containers: []
	W0414 15:36:56.871997 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:36:56.872004 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:36:56.872066 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:36:56.912159 1898413 cri.go:89] found id: ""
	I0414 15:36:56.912190 1898413 logs.go:282] 0 containers: []
	W0414 15:36:56.912203 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:36:56.912214 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:36:56.912229 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:36:56.967713 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:36:56.967751 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:36:56.983459 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:36:56.983489 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:36:57.069167 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:36:57.069192 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:36:57.069208 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:36:57.159941 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:36:57.159989 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:36:59.707037 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:36:59.721713 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:36:59.721798 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:36:59.770046 1898413 cri.go:89] found id: ""
	I0414 15:36:59.770079 1898413 logs.go:282] 0 containers: []
	W0414 15:36:59.770091 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:36:59.770099 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:36:59.770170 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:36:59.812502 1898413 cri.go:89] found id: ""
	I0414 15:36:59.812536 1898413 logs.go:282] 0 containers: []
	W0414 15:36:59.812548 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:36:59.812555 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:36:59.812621 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:36:59.850724 1898413 cri.go:89] found id: ""
	I0414 15:36:59.850751 1898413 logs.go:282] 0 containers: []
	W0414 15:36:59.850759 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:36:59.850765 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:36:59.850837 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:36:59.886212 1898413 cri.go:89] found id: ""
	I0414 15:36:59.886247 1898413 logs.go:282] 0 containers: []
	W0414 15:36:59.886259 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:36:59.886268 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:36:59.886334 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:36:59.923191 1898413 cri.go:89] found id: ""
	I0414 15:36:59.923219 1898413 logs.go:282] 0 containers: []
	W0414 15:36:59.923227 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:36:59.923233 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:36:59.923295 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:36:59.964514 1898413 cri.go:89] found id: ""
	I0414 15:36:59.964563 1898413 logs.go:282] 0 containers: []
	W0414 15:36:59.964576 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:36:59.964584 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:36:59.964650 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:00.009066 1898413 cri.go:89] found id: ""
	I0414 15:37:00.009102 1898413 logs.go:282] 0 containers: []
	W0414 15:37:00.009111 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:00.009118 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:00.009179 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:00.053978 1898413 cri.go:89] found id: ""
	I0414 15:37:00.054009 1898413 logs.go:282] 0 containers: []
	W0414 15:37:00.054020 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:00.054032 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:00.054047 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:00.106186 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:00.106236 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:00.121721 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:00.121769 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:00.203350 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:00.203376 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:00.203393 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:00.284397 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:00.284446 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:02.834521 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:02.849020 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:02.849108 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:02.893717 1898413 cri.go:89] found id: ""
	I0414 15:37:02.893754 1898413 logs.go:282] 0 containers: []
	W0414 15:37:02.893766 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:02.893775 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:02.893853 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:02.934114 1898413 cri.go:89] found id: ""
	I0414 15:37:02.934149 1898413 logs.go:282] 0 containers: []
	W0414 15:37:02.934161 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:02.934169 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:02.934242 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:02.972696 1898413 cri.go:89] found id: ""
	I0414 15:37:02.972816 1898413 logs.go:282] 0 containers: []
	W0414 15:37:02.972832 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:02.972841 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:02.972913 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:03.015753 1898413 cri.go:89] found id: ""
	I0414 15:37:03.015782 1898413 logs.go:282] 0 containers: []
	W0414 15:37:03.015791 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:03.015798 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:03.015862 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:03.055300 1898413 cri.go:89] found id: ""
	I0414 15:37:03.055328 1898413 logs.go:282] 0 containers: []
	W0414 15:37:03.055339 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:03.055347 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:03.055489 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:03.097961 1898413 cri.go:89] found id: ""
	I0414 15:37:03.097988 1898413 logs.go:282] 0 containers: []
	W0414 15:37:03.097999 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:03.098008 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:03.098074 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:03.138739 1898413 cri.go:89] found id: ""
	I0414 15:37:03.138768 1898413 logs.go:282] 0 containers: []
	W0414 15:37:03.138779 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:03.138796 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:03.138868 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:03.175857 1898413 cri.go:89] found id: ""
	I0414 15:37:03.175893 1898413 logs.go:282] 0 containers: []
	W0414 15:37:03.175904 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:03.175924 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:03.175941 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:03.231234 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:03.231267 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:03.249431 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:03.249477 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:03.332275 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:03.332306 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:03.332324 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:03.420031 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:03.420086 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:05.973084 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:05.990682 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:05.990764 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:06.032085 1898413 cri.go:89] found id: ""
	I0414 15:37:06.032120 1898413 logs.go:282] 0 containers: []
	W0414 15:37:06.032133 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:06.032140 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:06.032221 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:06.073097 1898413 cri.go:89] found id: ""
	I0414 15:37:06.073138 1898413 logs.go:282] 0 containers: []
	W0414 15:37:06.073150 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:06.073159 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:06.073228 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:06.110581 1898413 cri.go:89] found id: ""
	I0414 15:37:06.110620 1898413 logs.go:282] 0 containers: []
	W0414 15:37:06.110632 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:06.110641 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:06.110697 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:06.151862 1898413 cri.go:89] found id: ""
	I0414 15:37:06.151906 1898413 logs.go:282] 0 containers: []
	W0414 15:37:06.151916 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:06.151925 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:06.152039 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:06.192616 1898413 cri.go:89] found id: ""
	I0414 15:37:06.192643 1898413 logs.go:282] 0 containers: []
	W0414 15:37:06.192651 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:06.192657 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:06.192720 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:06.236114 1898413 cri.go:89] found id: ""
	I0414 15:37:06.236147 1898413 logs.go:282] 0 containers: []
	W0414 15:37:06.236159 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:06.236169 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:06.236237 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:06.270668 1898413 cri.go:89] found id: ""
	I0414 15:37:06.270699 1898413 logs.go:282] 0 containers: []
	W0414 15:37:06.270708 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:06.270714 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:06.270775 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:06.307007 1898413 cri.go:89] found id: ""
	I0414 15:37:06.307034 1898413 logs.go:282] 0 containers: []
	W0414 15:37:06.307042 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:06.307052 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:06.307064 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:06.367109 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:06.367151 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:06.381326 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:06.381356 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:06.454914 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:06.454948 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:06.454964 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:06.534174 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:06.534223 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:09.077251 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:09.091201 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:09.091266 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:09.127535 1898413 cri.go:89] found id: ""
	I0414 15:37:09.127573 1898413 logs.go:282] 0 containers: []
	W0414 15:37:09.127585 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:09.127593 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:09.127659 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:09.166080 1898413 cri.go:89] found id: ""
	I0414 15:37:09.166114 1898413 logs.go:282] 0 containers: []
	W0414 15:37:09.166126 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:09.166134 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:09.166192 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:09.202589 1898413 cri.go:89] found id: ""
	I0414 15:37:09.202616 1898413 logs.go:282] 0 containers: []
	W0414 15:37:09.202626 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:09.202632 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:09.202685 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:09.238436 1898413 cri.go:89] found id: ""
	I0414 15:37:09.238467 1898413 logs.go:282] 0 containers: []
	W0414 15:37:09.238476 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:09.238483 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:09.238538 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:09.277169 1898413 cri.go:89] found id: ""
	I0414 15:37:09.277205 1898413 logs.go:282] 0 containers: []
	W0414 15:37:09.277217 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:09.277226 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:09.277290 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:09.318441 1898413 cri.go:89] found id: ""
	I0414 15:37:09.318477 1898413 logs.go:282] 0 containers: []
	W0414 15:37:09.318489 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:09.318497 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:09.318578 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:09.358254 1898413 cri.go:89] found id: ""
	I0414 15:37:09.358291 1898413 logs.go:282] 0 containers: []
	W0414 15:37:09.358304 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:09.358312 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:09.358405 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:09.394703 1898413 cri.go:89] found id: ""
	I0414 15:37:09.394730 1898413 logs.go:282] 0 containers: []
	W0414 15:37:09.394739 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:09.394748 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:09.394760 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:09.449319 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:09.449370 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:09.463593 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:09.463626 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:09.535392 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:09.535425 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:09.535445 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:09.620703 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:09.620754 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:12.164730 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:12.179120 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:12.179187 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:12.214321 1898413 cri.go:89] found id: ""
	I0414 15:37:12.214355 1898413 logs.go:282] 0 containers: []
	W0414 15:37:12.214376 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:12.214385 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:12.214463 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:12.252470 1898413 cri.go:89] found id: ""
	I0414 15:37:12.252501 1898413 logs.go:282] 0 containers: []
	W0414 15:37:12.252512 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:12.252521 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:12.252588 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:12.291527 1898413 cri.go:89] found id: ""
	I0414 15:37:12.291561 1898413 logs.go:282] 0 containers: []
	W0414 15:37:12.291572 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:12.291581 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:12.291639 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:12.333133 1898413 cri.go:89] found id: ""
	I0414 15:37:12.333177 1898413 logs.go:282] 0 containers: []
	W0414 15:37:12.333188 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:12.333196 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:12.333266 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:12.370225 1898413 cri.go:89] found id: ""
	I0414 15:37:12.370264 1898413 logs.go:282] 0 containers: []
	W0414 15:37:12.370275 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:12.370284 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:12.370351 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:12.409315 1898413 cri.go:89] found id: ""
	I0414 15:37:12.409355 1898413 logs.go:282] 0 containers: []
	W0414 15:37:12.409368 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:12.409377 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:12.409450 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:12.446920 1898413 cri.go:89] found id: ""
	I0414 15:37:12.446952 1898413 logs.go:282] 0 containers: []
	W0414 15:37:12.446960 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:12.446966 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:12.447020 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:12.487092 1898413 cri.go:89] found id: ""
	I0414 15:37:12.487131 1898413 logs.go:282] 0 containers: []
	W0414 15:37:12.487142 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:12.487154 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:12.487170 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:12.567557 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:12.567612 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:12.612829 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:12.612871 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:12.665288 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:12.665335 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:12.680026 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:12.680054 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:12.753625 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:15.254746 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:15.268795 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:15.268881 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:15.309083 1898413 cri.go:89] found id: ""
	I0414 15:37:15.309132 1898413 logs.go:282] 0 containers: []
	W0414 15:37:15.309141 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:15.309148 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:15.309221 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:15.344649 1898413 cri.go:89] found id: ""
	I0414 15:37:15.344687 1898413 logs.go:282] 0 containers: []
	W0414 15:37:15.344696 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:15.344702 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:15.344769 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:15.380968 1898413 cri.go:89] found id: ""
	I0414 15:37:15.381005 1898413 logs.go:282] 0 containers: []
	W0414 15:37:15.381016 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:15.381025 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:15.381097 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:15.416567 1898413 cri.go:89] found id: ""
	I0414 15:37:15.416605 1898413 logs.go:282] 0 containers: []
	W0414 15:37:15.416618 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:15.416626 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:15.416694 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:15.457772 1898413 cri.go:89] found id: ""
	I0414 15:37:15.457801 1898413 logs.go:282] 0 containers: []
	W0414 15:37:15.457810 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:15.457816 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:15.457876 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:15.497315 1898413 cri.go:89] found id: ""
	I0414 15:37:15.497346 1898413 logs.go:282] 0 containers: []
	W0414 15:37:15.497355 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:15.497362 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:15.497416 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:15.533803 1898413 cri.go:89] found id: ""
	I0414 15:37:15.533837 1898413 logs.go:282] 0 containers: []
	W0414 15:37:15.533846 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:15.533853 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:15.533908 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:15.570114 1898413 cri.go:89] found id: ""
	I0414 15:37:15.570146 1898413 logs.go:282] 0 containers: []
	W0414 15:37:15.570157 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:15.570169 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:15.570182 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:15.621594 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:15.621646 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:15.636312 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:15.636348 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:15.715239 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:15.715267 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:15.715283 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:15.791609 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:15.791655 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:18.334532 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:18.349721 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:18.349803 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:18.392398 1898413 cri.go:89] found id: ""
	I0414 15:37:18.392433 1898413 logs.go:282] 0 containers: []
	W0414 15:37:18.392445 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:18.392456 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:18.392526 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:18.432052 1898413 cri.go:89] found id: ""
	I0414 15:37:18.432088 1898413 logs.go:282] 0 containers: []
	W0414 15:37:18.432097 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:18.432103 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:18.432159 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:18.470391 1898413 cri.go:89] found id: ""
	I0414 15:37:18.470426 1898413 logs.go:282] 0 containers: []
	W0414 15:37:18.470435 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:18.470442 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:18.470501 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:18.515768 1898413 cri.go:89] found id: ""
	I0414 15:37:18.515805 1898413 logs.go:282] 0 containers: []
	W0414 15:37:18.515815 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:18.515823 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:18.515917 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:18.561089 1898413 cri.go:89] found id: ""
	I0414 15:37:18.561117 1898413 logs.go:282] 0 containers: []
	W0414 15:37:18.561125 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:18.561131 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:18.561201 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:18.600702 1898413 cri.go:89] found id: ""
	I0414 15:37:18.600736 1898413 logs.go:282] 0 containers: []
	W0414 15:37:18.600745 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:18.600752 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:18.600814 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:18.647085 1898413 cri.go:89] found id: ""
	I0414 15:37:18.647109 1898413 logs.go:282] 0 containers: []
	W0414 15:37:18.647117 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:18.647123 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:18.647187 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:18.685370 1898413 cri.go:89] found id: ""
	I0414 15:37:18.685403 1898413 logs.go:282] 0 containers: []
	W0414 15:37:18.685416 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:18.685429 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:18.685447 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:18.763681 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:18.763731 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:18.793408 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:18.793442 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:18.888815 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:18.888846 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:18.888860 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:18.974892 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:18.974939 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:21.518521 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:21.541168 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:21.541249 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:21.599275 1898413 cri.go:89] found id: ""
	I0414 15:37:21.599307 1898413 logs.go:282] 0 containers: []
	W0414 15:37:21.599333 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:21.599341 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:21.599429 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:21.647623 1898413 cri.go:89] found id: ""
	I0414 15:37:21.647658 1898413 logs.go:282] 0 containers: []
	W0414 15:37:21.647672 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:21.647681 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:21.647749 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:21.697036 1898413 cri.go:89] found id: ""
	I0414 15:37:21.697076 1898413 logs.go:282] 0 containers: []
	W0414 15:37:21.697088 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:21.697096 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:21.697176 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:21.755164 1898413 cri.go:89] found id: ""
	I0414 15:37:21.755195 1898413 logs.go:282] 0 containers: []
	W0414 15:37:21.755206 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:21.755214 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:21.755287 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:21.804659 1898413 cri.go:89] found id: ""
	I0414 15:37:21.804695 1898413 logs.go:282] 0 containers: []
	W0414 15:37:21.804708 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:21.804717 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:21.804795 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:21.857585 1898413 cri.go:89] found id: ""
	I0414 15:37:21.857615 1898413 logs.go:282] 0 containers: []
	W0414 15:37:21.857625 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:21.857632 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:21.857685 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:21.906865 1898413 cri.go:89] found id: ""
	I0414 15:37:21.906897 1898413 logs.go:282] 0 containers: []
	W0414 15:37:21.906908 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:21.906915 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:21.906969 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:21.960021 1898413 cri.go:89] found id: ""
	I0414 15:37:21.960057 1898413 logs.go:282] 0 containers: []
	W0414 15:37:21.960070 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:21.960084 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:21.960100 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:21.981082 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:21.981127 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:22.066787 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:22.066815 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:22.066832 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:22.149032 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:22.149080 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:22.192053 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:22.192094 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:24.761418 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:24.800193 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:24.800277 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:24.865062 1898413 cri.go:89] found id: ""
	I0414 15:37:24.865100 1898413 logs.go:282] 0 containers: []
	W0414 15:37:24.865112 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:24.865122 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:24.865208 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:24.909169 1898413 cri.go:89] found id: ""
	I0414 15:37:24.909204 1898413 logs.go:282] 0 containers: []
	W0414 15:37:24.909218 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:24.909226 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:24.909306 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:24.958554 1898413 cri.go:89] found id: ""
	I0414 15:37:24.958589 1898413 logs.go:282] 0 containers: []
	W0414 15:37:24.958602 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:24.958611 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:24.958680 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:25.005143 1898413 cri.go:89] found id: ""
	I0414 15:37:25.005177 1898413 logs.go:282] 0 containers: []
	W0414 15:37:25.005187 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:25.005194 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:25.005254 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:25.046433 1898413 cri.go:89] found id: ""
	I0414 15:37:25.046465 1898413 logs.go:282] 0 containers: []
	W0414 15:37:25.046479 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:25.046487 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:25.046578 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:25.085424 1898413 cri.go:89] found id: ""
	I0414 15:37:25.085452 1898413 logs.go:282] 0 containers: []
	W0414 15:37:25.085460 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:25.085467 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:25.085527 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:25.131311 1898413 cri.go:89] found id: ""
	I0414 15:37:25.131340 1898413 logs.go:282] 0 containers: []
	W0414 15:37:25.131352 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:25.131361 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:25.131423 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:25.171461 1898413 cri.go:89] found id: ""
	I0414 15:37:25.171494 1898413 logs.go:282] 0 containers: []
	W0414 15:37:25.171507 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:25.171521 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:25.171540 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:25.188598 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:25.188650 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:25.285012 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:25.285040 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:25.285055 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:25.373751 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:25.373808 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:25.419921 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:25.419958 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:27.983794 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:27.997992 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:27.998072 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:28.039403 1898413 cri.go:89] found id: ""
	I0414 15:37:28.039430 1898413 logs.go:282] 0 containers: []
	W0414 15:37:28.039438 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:28.039445 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:28.039519 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:28.080566 1898413 cri.go:89] found id: ""
	I0414 15:37:28.080605 1898413 logs.go:282] 0 containers: []
	W0414 15:37:28.080617 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:28.080627 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:28.080695 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:28.121708 1898413 cri.go:89] found id: ""
	I0414 15:37:28.121741 1898413 logs.go:282] 0 containers: []
	W0414 15:37:28.121750 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:28.121756 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:28.121813 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:28.159213 1898413 cri.go:89] found id: ""
	I0414 15:37:28.159261 1898413 logs.go:282] 0 containers: []
	W0414 15:37:28.159272 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:28.159281 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:28.159360 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:28.199295 1898413 cri.go:89] found id: ""
	I0414 15:37:28.199326 1898413 logs.go:282] 0 containers: []
	W0414 15:37:28.199338 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:28.199347 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:28.199417 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:28.238742 1898413 cri.go:89] found id: ""
	I0414 15:37:28.238772 1898413 logs.go:282] 0 containers: []
	W0414 15:37:28.238789 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:28.238798 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:28.238868 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:28.278500 1898413 cri.go:89] found id: ""
	I0414 15:37:28.278541 1898413 logs.go:282] 0 containers: []
	W0414 15:37:28.278554 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:28.278563 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:28.278633 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:28.315877 1898413 cri.go:89] found id: ""
	I0414 15:37:28.315909 1898413 logs.go:282] 0 containers: []
	W0414 15:37:28.315922 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:28.315935 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:28.315957 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:28.358586 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:28.358622 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:28.420585 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:28.420637 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:28.436877 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:28.436924 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:28.525897 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:28.525921 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:28.525937 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:31.113288 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:31.127203 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:31.127288 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:31.163879 1898413 cri.go:89] found id: ""
	I0414 15:37:31.163915 1898413 logs.go:282] 0 containers: []
	W0414 15:37:31.163927 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:31.163935 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:31.164003 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:31.200107 1898413 cri.go:89] found id: ""
	I0414 15:37:31.200145 1898413 logs.go:282] 0 containers: []
	W0414 15:37:31.200156 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:31.200164 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:31.200232 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:31.242542 1898413 cri.go:89] found id: ""
	I0414 15:37:31.242583 1898413 logs.go:282] 0 containers: []
	W0414 15:37:31.242596 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:31.242605 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:31.242675 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:31.282413 1898413 cri.go:89] found id: ""
	I0414 15:37:31.282451 1898413 logs.go:282] 0 containers: []
	W0414 15:37:31.282462 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:31.282472 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:31.282570 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:31.323217 1898413 cri.go:89] found id: ""
	I0414 15:37:31.323247 1898413 logs.go:282] 0 containers: []
	W0414 15:37:31.323260 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:31.323267 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:31.323343 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:31.365099 1898413 cri.go:89] found id: ""
	I0414 15:37:31.365136 1898413 logs.go:282] 0 containers: []
	W0414 15:37:31.365147 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:31.365156 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:31.365225 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:31.407148 1898413 cri.go:89] found id: ""
	I0414 15:37:31.407177 1898413 logs.go:282] 0 containers: []
	W0414 15:37:31.407185 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:31.407191 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:31.407249 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:31.448905 1898413 cri.go:89] found id: ""
	I0414 15:37:31.448942 1898413 logs.go:282] 0 containers: []
	W0414 15:37:31.448954 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:31.448966 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:31.448982 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:31.528982 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:31.529018 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:31.529040 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:31.613368 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:31.613412 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:31.666879 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:31.666926 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:31.720557 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:31.720612 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:34.238509 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:34.252798 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:34.252888 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:34.295551 1898413 cri.go:89] found id: ""
	I0414 15:37:34.295584 1898413 logs.go:282] 0 containers: []
	W0414 15:37:34.295597 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:34.295605 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:34.295678 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:34.342046 1898413 cri.go:89] found id: ""
	I0414 15:37:34.342076 1898413 logs.go:282] 0 containers: []
	W0414 15:37:34.342088 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:34.342097 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:34.342163 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:34.384562 1898413 cri.go:89] found id: ""
	I0414 15:37:34.384606 1898413 logs.go:282] 0 containers: []
	W0414 15:37:34.384629 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:34.384637 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:34.384708 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:34.424431 1898413 cri.go:89] found id: ""
	I0414 15:37:34.424469 1898413 logs.go:282] 0 containers: []
	W0414 15:37:34.424486 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:34.424495 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:34.424560 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:34.468914 1898413 cri.go:89] found id: ""
	I0414 15:37:34.468963 1898413 logs.go:282] 0 containers: []
	W0414 15:37:34.468975 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:34.468983 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:34.469057 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:34.509590 1898413 cri.go:89] found id: ""
	I0414 15:37:34.509623 1898413 logs.go:282] 0 containers: []
	W0414 15:37:34.509634 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:34.509642 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:34.509711 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:34.557285 1898413 cri.go:89] found id: ""
	I0414 15:37:34.557311 1898413 logs.go:282] 0 containers: []
	W0414 15:37:34.557322 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:34.557330 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:34.557398 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:34.603423 1898413 cri.go:89] found id: ""
	I0414 15:37:34.603462 1898413 logs.go:282] 0 containers: []
	W0414 15:37:34.603474 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:34.603486 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:34.603512 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:34.657280 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:34.657324 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:34.673020 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:34.673063 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:34.754531 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:34.754554 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:34.754574 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:34.838315 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:34.838361 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:37.394539 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:37.409762 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:37.409860 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:37.451724 1898413 cri.go:89] found id: ""
	I0414 15:37:37.451757 1898413 logs.go:282] 0 containers: []
	W0414 15:37:37.451767 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:37.451773 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:37.451844 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:37.499729 1898413 cri.go:89] found id: ""
	I0414 15:37:37.499768 1898413 logs.go:282] 0 containers: []
	W0414 15:37:37.499784 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:37.499795 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:37.499885 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:37.544657 1898413 cri.go:89] found id: ""
	I0414 15:37:37.544691 1898413 logs.go:282] 0 containers: []
	W0414 15:37:37.544702 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:37.544708 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:37.544774 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:37.586360 1898413 cri.go:89] found id: ""
	I0414 15:37:37.586420 1898413 logs.go:282] 0 containers: []
	W0414 15:37:37.586429 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:37.586435 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:37.586496 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:37.630832 1898413 cri.go:89] found id: ""
	I0414 15:37:37.630864 1898413 logs.go:282] 0 containers: []
	W0414 15:37:37.630872 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:37.630878 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:37.630943 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:37.673957 1898413 cri.go:89] found id: ""
	I0414 15:37:37.673999 1898413 logs.go:282] 0 containers: []
	W0414 15:37:37.674012 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:37.674021 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:37.674094 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:37.713830 1898413 cri.go:89] found id: ""
	I0414 15:37:37.713871 1898413 logs.go:282] 0 containers: []
	W0414 15:37:37.713882 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:37.713891 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:37.713961 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:37.756924 1898413 cri.go:89] found id: ""
	I0414 15:37:37.756951 1898413 logs.go:282] 0 containers: []
	W0414 15:37:37.756959 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:37.756970 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:37.756983 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:37.834923 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:37.834955 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:37.834973 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:37.927996 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:37.928044 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:37.982948 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:37.982985 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:38.038323 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:38.038392 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:40.555177 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:40.573959 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:40.574046 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:40.614938 1898413 cri.go:89] found id: ""
	I0414 15:37:40.614980 1898413 logs.go:282] 0 containers: []
	W0414 15:37:40.614993 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:40.615001 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:40.615064 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:40.655306 1898413 cri.go:89] found id: ""
	I0414 15:37:40.655345 1898413 logs.go:282] 0 containers: []
	W0414 15:37:40.655358 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:40.655367 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:40.655438 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:40.700322 1898413 cri.go:89] found id: ""
	I0414 15:37:40.700357 1898413 logs.go:282] 0 containers: []
	W0414 15:37:40.700376 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:40.700383 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:40.700469 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:40.743056 1898413 cri.go:89] found id: ""
	I0414 15:37:40.743084 1898413 logs.go:282] 0 containers: []
	W0414 15:37:40.743095 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:40.743103 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:40.743171 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:40.783747 1898413 cri.go:89] found id: ""
	I0414 15:37:40.783782 1898413 logs.go:282] 0 containers: []
	W0414 15:37:40.783795 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:40.783802 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:40.783870 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:40.825731 1898413 cri.go:89] found id: ""
	I0414 15:37:40.825768 1898413 logs.go:282] 0 containers: []
	W0414 15:37:40.825782 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:40.825791 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:40.825863 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:40.866070 1898413 cri.go:89] found id: ""
	I0414 15:37:40.866100 1898413 logs.go:282] 0 containers: []
	W0414 15:37:40.866113 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:40.866121 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:40.866191 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:40.904430 1898413 cri.go:89] found id: ""
	I0414 15:37:40.904464 1898413 logs.go:282] 0 containers: []
	W0414 15:37:40.904477 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:40.904490 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:40.904522 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:40.963829 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:40.963887 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:40.980402 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:40.980447 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:41.076838 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:41.076870 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:41.076887 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:41.180390 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:41.180429 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
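	# A minimal sketch of the container probes recorded above, assuming shell
	# access to the node (e.g. `minikube ssh`) and that crictl is on PATH; the
	# component names mirror the ones the log queries each cycle.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  if [ -n "${ids}" ]; then
	    echo "${name}: ${ids}"
	  else
	    echo "no containers found matching \"${name}\""
	  fi
	done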
	I0414 15:37:43.736404 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:43.751005 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:43.751089 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:43.794195 1898413 cri.go:89] found id: ""
	I0414 15:37:43.794229 1898413 logs.go:282] 0 containers: []
	W0414 15:37:43.794243 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:43.794251 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:43.794324 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:43.839455 1898413 cri.go:89] found id: ""
	I0414 15:37:43.839488 1898413 logs.go:282] 0 containers: []
	W0414 15:37:43.839499 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:43.839506 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:43.839589 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:43.883491 1898413 cri.go:89] found id: ""
	I0414 15:37:43.883515 1898413 logs.go:282] 0 containers: []
	W0414 15:37:43.883526 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:43.883535 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:43.883597 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:43.928838 1898413 cri.go:89] found id: ""
	I0414 15:37:43.928874 1898413 logs.go:282] 0 containers: []
	W0414 15:37:43.928885 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:43.928891 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:43.928947 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:43.970832 1898413 cri.go:89] found id: ""
	I0414 15:37:43.970865 1898413 logs.go:282] 0 containers: []
	W0414 15:37:43.970876 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:43.970885 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:43.970955 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:44.012429 1898413 cri.go:89] found id: ""
	I0414 15:37:44.012473 1898413 logs.go:282] 0 containers: []
	W0414 15:37:44.012486 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:44.012495 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:44.012558 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:44.049347 1898413 cri.go:89] found id: ""
	I0414 15:37:44.049382 1898413 logs.go:282] 0 containers: []
	W0414 15:37:44.049394 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:44.049402 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:44.049483 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:44.091941 1898413 cri.go:89] found id: ""
	I0414 15:37:44.091980 1898413 logs.go:282] 0 containers: []
	W0414 15:37:44.091992 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:44.092006 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:44.092022 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:44.191196 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:44.191242 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:44.240137 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:44.240177 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:44.314707 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:44.314749 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:44.334697 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:44.334733 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:44.411430 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:46.912425 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:46.926186 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:46.926258 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:46.969257 1898413 cri.go:89] found id: ""
	I0414 15:37:46.969293 1898413 logs.go:282] 0 containers: []
	W0414 15:37:46.969303 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:46.969309 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:46.969367 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:47.007953 1898413 cri.go:89] found id: ""
	I0414 15:37:47.007984 1898413 logs.go:282] 0 containers: []
	W0414 15:37:47.007993 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:47.007999 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:47.008072 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:47.048121 1898413 cri.go:89] found id: ""
	I0414 15:37:47.048148 1898413 logs.go:282] 0 containers: []
	W0414 15:37:47.048156 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:47.048164 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:47.048219 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:47.091557 1898413 cri.go:89] found id: ""
	I0414 15:37:47.091590 1898413 logs.go:282] 0 containers: []
	W0414 15:37:47.091602 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:47.091611 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:47.091684 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:47.133593 1898413 cri.go:89] found id: ""
	I0414 15:37:47.133626 1898413 logs.go:282] 0 containers: []
	W0414 15:37:47.133637 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:47.133648 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:47.133718 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:47.170054 1898413 cri.go:89] found id: ""
	I0414 15:37:47.170087 1898413 logs.go:282] 0 containers: []
	W0414 15:37:47.170096 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:47.170103 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:47.170158 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:47.210724 1898413 cri.go:89] found id: ""
	I0414 15:37:47.210756 1898413 logs.go:282] 0 containers: []
	W0414 15:37:47.210767 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:47.210775 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:47.210857 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:47.249407 1898413 cri.go:89] found id: ""
	I0414 15:37:47.249439 1898413 logs.go:282] 0 containers: []
	W0414 15:37:47.249447 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:47.249458 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:47.249470 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:47.264146 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:47.264187 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:47.358909 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:47.358931 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:47.358946 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:47.444959 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:47.445012 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:47.488409 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:47.488456 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:50.042288 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:50.056997 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:50.057080 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:50.095135 1898413 cri.go:89] found id: ""
	I0414 15:37:50.095180 1898413 logs.go:282] 0 containers: []
	W0414 15:37:50.095193 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:50.095202 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:50.095276 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:50.137448 1898413 cri.go:89] found id: ""
	I0414 15:37:50.137472 1898413 logs.go:282] 0 containers: []
	W0414 15:37:50.137480 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:50.137485 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:50.137536 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:50.175853 1898413 cri.go:89] found id: ""
	I0414 15:37:50.175887 1898413 logs.go:282] 0 containers: []
	W0414 15:37:50.175899 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:50.175907 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:50.175980 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:50.217413 1898413 cri.go:89] found id: ""
	I0414 15:37:50.217443 1898413 logs.go:282] 0 containers: []
	W0414 15:37:50.217453 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:50.217461 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:50.217525 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:50.256726 1898413 cri.go:89] found id: ""
	I0414 15:37:50.256754 1898413 logs.go:282] 0 containers: []
	W0414 15:37:50.256763 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:50.256768 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:50.256833 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:50.295003 1898413 cri.go:89] found id: ""
	I0414 15:37:50.295043 1898413 logs.go:282] 0 containers: []
	W0414 15:37:50.295055 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:50.295064 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:50.295133 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:50.335718 1898413 cri.go:89] found id: ""
	I0414 15:37:50.335745 1898413 logs.go:282] 0 containers: []
	W0414 15:37:50.335755 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:50.335762 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:50.335824 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:50.377843 1898413 cri.go:89] found id: ""
	I0414 15:37:50.377877 1898413 logs.go:282] 0 containers: []
	W0414 15:37:50.377892 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:50.377903 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:50.377918 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:50.423466 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:50.423509 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:50.478902 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:50.478947 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:50.495225 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:50.495256 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:50.575722 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:50.575752 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:50.575770 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
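	# A minimal sketch of the log-gathering commands shown above, assuming the
	# same paths as in the log (kubectl under /var/lib/minikube/binaries/v1.20.0,
	# kubeconfig at /var/lib/minikube/kubeconfig) and a CRI-O based guest.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	sudo crictl ps -a || sudo docker ps -a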
	I0414 15:37:53.158642 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:53.179822 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:53.179913 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:53.218481 1898413 cri.go:89] found id: ""
	I0414 15:37:53.218518 1898413 logs.go:282] 0 containers: []
	W0414 15:37:53.218529 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:53.218559 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:53.218628 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:53.262717 1898413 cri.go:89] found id: ""
	I0414 15:37:53.262751 1898413 logs.go:282] 0 containers: []
	W0414 15:37:53.262763 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:53.262771 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:53.262842 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:53.302789 1898413 cri.go:89] found id: ""
	I0414 15:37:53.302830 1898413 logs.go:282] 0 containers: []
	W0414 15:37:53.302843 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:53.302853 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:53.302950 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:53.340050 1898413 cri.go:89] found id: ""
	I0414 15:37:53.340085 1898413 logs.go:282] 0 containers: []
	W0414 15:37:53.340095 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:53.340103 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:53.340174 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:53.379449 1898413 cri.go:89] found id: ""
	I0414 15:37:53.379479 1898413 logs.go:282] 0 containers: []
	W0414 15:37:53.379488 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:53.379494 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:53.379551 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:53.421605 1898413 cri.go:89] found id: ""
	I0414 15:37:53.421633 1898413 logs.go:282] 0 containers: []
	W0414 15:37:53.421642 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:53.421648 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:53.421703 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:53.459197 1898413 cri.go:89] found id: ""
	I0414 15:37:53.459235 1898413 logs.go:282] 0 containers: []
	W0414 15:37:53.459243 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:53.459249 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:53.459303 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:53.504687 1898413 cri.go:89] found id: ""
	I0414 15:37:53.504724 1898413 logs.go:282] 0 containers: []
	W0414 15:37:53.504735 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:53.504756 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:53.504774 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:53.586486 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:53.586524 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:53.586538 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:53.681182 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:53.681241 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:53.737345 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:53.737387 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:53.813028 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:53.813077 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:56.332287 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:56.351993 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:56.352084 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:56.395140 1898413 cri.go:89] found id: ""
	I0414 15:37:56.395177 1898413 logs.go:282] 0 containers: []
	W0414 15:37:56.395190 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:56.395199 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:56.395265 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:56.432998 1898413 cri.go:89] found id: ""
	I0414 15:37:56.433032 1898413 logs.go:282] 0 containers: []
	W0414 15:37:56.433044 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:56.433051 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:56.433123 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:56.470969 1898413 cri.go:89] found id: ""
	I0414 15:37:56.471006 1898413 logs.go:282] 0 containers: []
	W0414 15:37:56.471018 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:56.471027 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:56.471090 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:56.510462 1898413 cri.go:89] found id: ""
	I0414 15:37:56.510491 1898413 logs.go:282] 0 containers: []
	W0414 15:37:56.510502 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:56.510510 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:56.510577 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:56.547896 1898413 cri.go:89] found id: ""
	I0414 15:37:56.547932 1898413 logs.go:282] 0 containers: []
	W0414 15:37:56.547945 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:56.547953 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:56.548028 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:56.585341 1898413 cri.go:89] found id: ""
	I0414 15:37:56.585375 1898413 logs.go:282] 0 containers: []
	W0414 15:37:56.585383 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:56.585391 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:56.585465 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:56.626634 1898413 cri.go:89] found id: ""
	I0414 15:37:56.626662 1898413 logs.go:282] 0 containers: []
	W0414 15:37:56.626674 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:56.626682 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:56.626756 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:56.672360 1898413 cri.go:89] found id: ""
	I0414 15:37:56.672398 1898413 logs.go:282] 0 containers: []
	W0414 15:37:56.672410 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:56.672422 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:56.672438 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:56.752730 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:56.752767 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:56.752785 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:37:56.831469 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:37:56.831514 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:37:56.874849 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:56.874887 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:56.937689 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:56.937745 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:59.455217 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:37:59.472821 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:37:59.472918 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:37:59.512542 1898413 cri.go:89] found id: ""
	I0414 15:37:59.512578 1898413 logs.go:282] 0 containers: []
	W0414 15:37:59.512590 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:37:59.512599 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:37:59.512671 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:37:59.569884 1898413 cri.go:89] found id: ""
	I0414 15:37:59.569919 1898413 logs.go:282] 0 containers: []
	W0414 15:37:59.569930 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:37:59.569938 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:37:59.569993 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:37:59.617232 1898413 cri.go:89] found id: ""
	I0414 15:37:59.617266 1898413 logs.go:282] 0 containers: []
	W0414 15:37:59.617274 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:37:59.617280 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:37:59.617346 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:37:59.659934 1898413 cri.go:89] found id: ""
	I0414 15:37:59.659973 1898413 logs.go:282] 0 containers: []
	W0414 15:37:59.659985 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:37:59.659994 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:37:59.660052 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:37:59.701882 1898413 cri.go:89] found id: ""
	I0414 15:37:59.701914 1898413 logs.go:282] 0 containers: []
	W0414 15:37:59.701925 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:37:59.701932 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:37:59.702018 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:37:59.742951 1898413 cri.go:89] found id: ""
	I0414 15:37:59.742984 1898413 logs.go:282] 0 containers: []
	W0414 15:37:59.742993 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:37:59.742999 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:37:59.743059 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:37:59.790226 1898413 cri.go:89] found id: ""
	I0414 15:37:59.790255 1898413 logs.go:282] 0 containers: []
	W0414 15:37:59.790263 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:37:59.790269 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:37:59.790328 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:37:59.831232 1898413 cri.go:89] found id: ""
	I0414 15:37:59.831261 1898413 logs.go:282] 0 containers: []
	W0414 15:37:59.831270 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:37:59.831283 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:37:59.831297 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:37:59.887068 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:37:59.887111 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:37:59.902151 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:37:59.902195 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:37:59.979184 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:37:59.979216 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:37:59.979234 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:00.062736 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:00.062778 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:02.605915 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:02.623206 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:02.623299 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:02.667084 1898413 cri.go:89] found id: ""
	I0414 15:38:02.667114 1898413 logs.go:282] 0 containers: []
	W0414 15:38:02.667127 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:02.667135 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:02.667210 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:02.706718 1898413 cri.go:89] found id: ""
	I0414 15:38:02.706760 1898413 logs.go:282] 0 containers: []
	W0414 15:38:02.706776 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:02.706784 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:02.706851 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:02.749368 1898413 cri.go:89] found id: ""
	I0414 15:38:02.749407 1898413 logs.go:282] 0 containers: []
	W0414 15:38:02.749420 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:02.749428 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:02.749495 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:02.791461 1898413 cri.go:89] found id: ""
	I0414 15:38:02.791498 1898413 logs.go:282] 0 containers: []
	W0414 15:38:02.791510 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:02.791518 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:02.791634 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:02.831175 1898413 cri.go:89] found id: ""
	I0414 15:38:02.831225 1898413 logs.go:282] 0 containers: []
	W0414 15:38:02.831238 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:02.831247 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:02.831320 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:02.870632 1898413 cri.go:89] found id: ""
	I0414 15:38:02.870669 1898413 logs.go:282] 0 containers: []
	W0414 15:38:02.870682 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:02.870691 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:02.870762 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:02.911650 1898413 cri.go:89] found id: ""
	I0414 15:38:02.911680 1898413 logs.go:282] 0 containers: []
	W0414 15:38:02.911688 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:02.911695 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:02.911749 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:02.953311 1898413 cri.go:89] found id: ""
	I0414 15:38:02.953347 1898413 logs.go:282] 0 containers: []
	W0414 15:38:02.953360 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:02.953373 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:02.953390 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:03.029448 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:03.029516 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:03.045039 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:03.045072 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:03.123121 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:03.123154 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:03.123176 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:03.201836 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:03.201884 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:05.759652 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:05.774008 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:05.774101 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:05.822185 1898413 cri.go:89] found id: ""
	I0414 15:38:05.822220 1898413 logs.go:282] 0 containers: []
	W0414 15:38:05.822232 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:05.822241 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:05.822305 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:05.863520 1898413 cri.go:89] found id: ""
	I0414 15:38:05.863559 1898413 logs.go:282] 0 containers: []
	W0414 15:38:05.863573 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:05.863580 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:05.863666 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:05.912825 1898413 cri.go:89] found id: ""
	I0414 15:38:05.912864 1898413 logs.go:282] 0 containers: []
	W0414 15:38:05.912878 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:05.912888 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:05.912964 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:05.963531 1898413 cri.go:89] found id: ""
	I0414 15:38:05.963560 1898413 logs.go:282] 0 containers: []
	W0414 15:38:05.963571 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:05.963577 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:05.963631 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:06.014603 1898413 cri.go:89] found id: ""
	I0414 15:38:06.014653 1898413 logs.go:282] 0 containers: []
	W0414 15:38:06.014666 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:06.014674 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:06.014748 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:06.064722 1898413 cri.go:89] found id: ""
	I0414 15:38:06.064746 1898413 logs.go:282] 0 containers: []
	W0414 15:38:06.064765 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:06.064774 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:06.064846 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:06.104408 1898413 cri.go:89] found id: ""
	I0414 15:38:06.104445 1898413 logs.go:282] 0 containers: []
	W0414 15:38:06.104455 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:06.104463 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:06.104560 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:06.147138 1898413 cri.go:89] found id: ""
	I0414 15:38:06.147174 1898413 logs.go:282] 0 containers: []
	W0414 15:38:06.147185 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:06.147200 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:06.147216 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:06.200444 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:06.200491 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:06.265899 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:06.265949 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:06.282401 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:06.282442 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:06.366251 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:06.366284 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:06.366305 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
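	# The repeated "connection to the server localhost:8443 was refused" above
	# indicates nothing is serving on the apiserver port. A minimal reachability
	# check, assuming curl and iproute2 (ss) are available in the guest; these
	# commands are not part of the captured run.
	sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
	sudo pgrep -af kube-apiserver || echo "no kube-apiserver process"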
	I0414 15:38:08.963004 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:08.977659 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:08.977733 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:09.013747 1898413 cri.go:89] found id: ""
	I0414 15:38:09.013792 1898413 logs.go:282] 0 containers: []
	W0414 15:38:09.013802 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:09.013808 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:09.013867 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:09.052935 1898413 cri.go:89] found id: ""
	I0414 15:38:09.052976 1898413 logs.go:282] 0 containers: []
	W0414 15:38:09.052987 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:09.052994 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:09.053071 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:09.093066 1898413 cri.go:89] found id: ""
	I0414 15:38:09.093098 1898413 logs.go:282] 0 containers: []
	W0414 15:38:09.093107 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:09.093114 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:09.093169 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:09.131422 1898413 cri.go:89] found id: ""
	I0414 15:38:09.131458 1898413 logs.go:282] 0 containers: []
	W0414 15:38:09.131471 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:09.131482 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:09.131560 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:09.169396 1898413 cri.go:89] found id: ""
	I0414 15:38:09.169431 1898413 logs.go:282] 0 containers: []
	W0414 15:38:09.169442 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:09.169448 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:09.169519 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:09.208375 1898413 cri.go:89] found id: ""
	I0414 15:38:09.208425 1898413 logs.go:282] 0 containers: []
	W0414 15:38:09.208438 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:09.208446 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:09.208514 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:09.246897 1898413 cri.go:89] found id: ""
	I0414 15:38:09.246930 1898413 logs.go:282] 0 containers: []
	W0414 15:38:09.246940 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:09.246946 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:09.247018 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:09.288350 1898413 cri.go:89] found id: ""
	I0414 15:38:09.288389 1898413 logs.go:282] 0 containers: []
	W0414 15:38:09.288402 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:09.288416 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:09.288432 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:09.343878 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:09.343927 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:09.358269 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:09.358309 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:09.430661 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:09.430716 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:09.430750 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:09.508275 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:09.508324 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:12.050517 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:12.066252 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:12.066329 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:12.104589 1898413 cri.go:89] found id: ""
	I0414 15:38:12.104627 1898413 logs.go:282] 0 containers: []
	W0414 15:38:12.104641 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:12.104650 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:12.104714 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:12.142298 1898413 cri.go:89] found id: ""
	I0414 15:38:12.142337 1898413 logs.go:282] 0 containers: []
	W0414 15:38:12.142347 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:12.142354 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:12.142438 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:12.178942 1898413 cri.go:89] found id: ""
	I0414 15:38:12.178979 1898413 logs.go:282] 0 containers: []
	W0414 15:38:12.178989 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:12.178997 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:12.179053 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:12.215083 1898413 cri.go:89] found id: ""
	I0414 15:38:12.215120 1898413 logs.go:282] 0 containers: []
	W0414 15:38:12.215132 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:12.215138 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:12.215201 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:12.253535 1898413 cri.go:89] found id: ""
	I0414 15:38:12.253572 1898413 logs.go:282] 0 containers: []
	W0414 15:38:12.253584 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:12.253592 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:12.253667 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:12.289266 1898413 cri.go:89] found id: ""
	I0414 15:38:12.289289 1898413 logs.go:282] 0 containers: []
	W0414 15:38:12.289300 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:12.289309 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:12.289367 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:12.327160 1898413 cri.go:89] found id: ""
	I0414 15:38:12.327195 1898413 logs.go:282] 0 containers: []
	W0414 15:38:12.327206 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:12.327213 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:12.327284 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:12.364279 1898413 cri.go:89] found id: ""
	I0414 15:38:12.364317 1898413 logs.go:282] 0 containers: []
	W0414 15:38:12.364329 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:12.364342 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:12.364357 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:12.418206 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:12.418265 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:12.435320 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:12.435352 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:12.520459 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:12.520494 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:12.520511 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:12.602759 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:12.602805 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:15.151175 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:15.166755 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:15.166832 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:15.204366 1898413 cri.go:89] found id: ""
	I0414 15:38:15.204401 1898413 logs.go:282] 0 containers: []
	W0414 15:38:15.204410 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:15.204416 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:15.204471 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:15.245609 1898413 cri.go:89] found id: ""
	I0414 15:38:15.245651 1898413 logs.go:282] 0 containers: []
	W0414 15:38:15.245665 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:15.245673 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:15.245751 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:15.281581 1898413 cri.go:89] found id: ""
	I0414 15:38:15.281614 1898413 logs.go:282] 0 containers: []
	W0414 15:38:15.281624 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:15.281632 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:15.281695 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:15.319186 1898413 cri.go:89] found id: ""
	I0414 15:38:15.319221 1898413 logs.go:282] 0 containers: []
	W0414 15:38:15.319235 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:15.319241 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:15.319302 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:15.355724 1898413 cri.go:89] found id: ""
	I0414 15:38:15.355761 1898413 logs.go:282] 0 containers: []
	W0414 15:38:15.355775 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:15.355781 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:15.355840 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:15.391884 1898413 cri.go:89] found id: ""
	I0414 15:38:15.391916 1898413 logs.go:282] 0 containers: []
	W0414 15:38:15.391926 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:15.391933 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:15.391987 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:15.427613 1898413 cri.go:89] found id: ""
	I0414 15:38:15.427654 1898413 logs.go:282] 0 containers: []
	W0414 15:38:15.427667 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:15.427679 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:15.427739 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:15.466068 1898413 cri.go:89] found id: ""
	I0414 15:38:15.466106 1898413 logs.go:282] 0 containers: []
	W0414 15:38:15.466118 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:15.466130 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:15.466147 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:15.525020 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:15.525068 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:15.539452 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:15.539503 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:15.611376 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:15.611401 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:15.611416 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:15.692828 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:15.692876 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
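
The cycle above is the shape of every retry in this log: for each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) the node is asked for matching CRI containers and none are found. A minimal, hypothetical Go sketch of that per-component probe (not minikube source; it only assumes crictl is runnable via sudo on the node) could look like this:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component names the log above probes, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors `sudo crictl ps -a --quiet --name=<component>` from the log.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			// Corresponds to the repeated `No container was found matching ...` warnings.
			fmt.Printf("%s: no containers found\n", name)
		} else {
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}
}

An all-empty result, as seen here, means the runtime answers but no control-plane containers exist yet, which is consistent with the apiserver checks below continuing to fail.
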
	I0414 15:38:18.238542 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:18.254524 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:18.254597 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:18.291748 1898413 cri.go:89] found id: ""
	I0414 15:38:18.291790 1898413 logs.go:282] 0 containers: []
	W0414 15:38:18.291801 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:18.291810 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:18.291878 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:18.328611 1898413 cri.go:89] found id: ""
	I0414 15:38:18.328639 1898413 logs.go:282] 0 containers: []
	W0414 15:38:18.328662 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:18.328670 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:18.328738 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:18.366083 1898413 cri.go:89] found id: ""
	I0414 15:38:18.366122 1898413 logs.go:282] 0 containers: []
	W0414 15:38:18.366135 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:18.366141 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:18.366213 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:18.403709 1898413 cri.go:89] found id: ""
	I0414 15:38:18.403738 1898413 logs.go:282] 0 containers: []
	W0414 15:38:18.403748 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:18.403763 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:18.403816 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:18.451612 1898413 cri.go:89] found id: ""
	I0414 15:38:18.451647 1898413 logs.go:282] 0 containers: []
	W0414 15:38:18.451659 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:18.451667 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:18.451732 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:18.494083 1898413 cri.go:89] found id: ""
	I0414 15:38:18.494120 1898413 logs.go:282] 0 containers: []
	W0414 15:38:18.494138 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:18.494146 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:18.494217 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:18.532793 1898413 cri.go:89] found id: ""
	I0414 15:38:18.532844 1898413 logs.go:282] 0 containers: []
	W0414 15:38:18.532857 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:18.532867 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:18.532945 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:18.572814 1898413 cri.go:89] found id: ""
	I0414 15:38:18.572837 1898413 logs.go:282] 0 containers: []
	W0414 15:38:18.572847 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:18.572859 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:18.572878 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:18.626955 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:18.626997 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:18.644319 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:18.644371 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:18.726811 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:18.726848 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:18.726866 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:18.804758 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:18.804812 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:21.349969 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:21.365443 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:21.365517 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:21.405130 1898413 cri.go:89] found id: ""
	I0414 15:38:21.405176 1898413 logs.go:282] 0 containers: []
	W0414 15:38:21.405185 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:21.405192 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:21.405262 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:21.444692 1898413 cri.go:89] found id: ""
	I0414 15:38:21.444723 1898413 logs.go:282] 0 containers: []
	W0414 15:38:21.444732 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:21.444739 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:21.444796 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:21.481731 1898413 cri.go:89] found id: ""
	I0414 15:38:21.481760 1898413 logs.go:282] 0 containers: []
	W0414 15:38:21.481768 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:21.481774 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:21.481832 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:21.517629 1898413 cri.go:89] found id: ""
	I0414 15:38:21.517660 1898413 logs.go:282] 0 containers: []
	W0414 15:38:21.517673 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:21.517682 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:21.517753 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:21.551927 1898413 cri.go:89] found id: ""
	I0414 15:38:21.551964 1898413 logs.go:282] 0 containers: []
	W0414 15:38:21.551976 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:21.551984 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:21.552051 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:21.595607 1898413 cri.go:89] found id: ""
	I0414 15:38:21.595633 1898413 logs.go:282] 0 containers: []
	W0414 15:38:21.595641 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:21.595647 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:21.595701 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:21.637417 1898413 cri.go:89] found id: ""
	I0414 15:38:21.637445 1898413 logs.go:282] 0 containers: []
	W0414 15:38:21.637456 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:21.637464 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:21.637534 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:21.675843 1898413 cri.go:89] found id: ""
	I0414 15:38:21.675875 1898413 logs.go:282] 0 containers: []
	W0414 15:38:21.675885 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:21.675899 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:21.675915 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:21.751658 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:21.751693 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:21.751710 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:21.834800 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:21.834859 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:21.880705 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:21.880737 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:21.931276 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:21.931334 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:24.449241 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:24.462839 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:24.462921 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:24.502733 1898413 cri.go:89] found id: ""
	I0414 15:38:24.502767 1898413 logs.go:282] 0 containers: []
	W0414 15:38:24.502807 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:24.502817 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:24.502897 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:24.540266 1898413 cri.go:89] found id: ""
	I0414 15:38:24.540294 1898413 logs.go:282] 0 containers: []
	W0414 15:38:24.540302 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:24.540308 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:24.540364 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:24.580309 1898413 cri.go:89] found id: ""
	I0414 15:38:24.580335 1898413 logs.go:282] 0 containers: []
	W0414 15:38:24.580342 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:24.580349 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:24.580403 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:24.621048 1898413 cri.go:89] found id: ""
	I0414 15:38:24.621081 1898413 logs.go:282] 0 containers: []
	W0414 15:38:24.621092 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:24.621100 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:24.621167 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:24.659003 1898413 cri.go:89] found id: ""
	I0414 15:38:24.659034 1898413 logs.go:282] 0 containers: []
	W0414 15:38:24.659044 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:24.659049 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:24.659113 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:24.694620 1898413 cri.go:89] found id: ""
	I0414 15:38:24.694654 1898413 logs.go:282] 0 containers: []
	W0414 15:38:24.694665 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:24.694674 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:24.694738 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:24.737721 1898413 cri.go:89] found id: ""
	I0414 15:38:24.737758 1898413 logs.go:282] 0 containers: []
	W0414 15:38:24.737768 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:24.737774 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:24.737852 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:24.777688 1898413 cri.go:89] found id: ""
	I0414 15:38:24.777720 1898413 logs.go:282] 0 containers: []
	W0414 15:38:24.777732 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:24.777742 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:24.777758 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:24.793437 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:24.793483 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:24.875274 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:24.875300 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:24.875316 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:24.960453 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:24.960502 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:25.004981 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:25.005026 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
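
The "container status" gather repeated above is a shell one-liner with a fallback: try crictl first and, if it is missing or fails, fall back to the Docker CLI. A small illustrative Go equivalent of that fallback (hypothetical, not minikube source):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// First choice: `sudo crictl ps -a`, as in the logged one-liner.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		// Fallback branch of `... || sudo docker ps -a`.
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Printf("neither crictl nor docker could list containers: %v\n", err)
		return
	}
	fmt.Print(string(out))
}

On a CRI-O node like this one the crictl branch is expected to succeed, so the docker fallback should never be reached.
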
	I0414 15:38:27.574293 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:27.589222 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:27.589306 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:27.627762 1898413 cri.go:89] found id: ""
	I0414 15:38:27.627795 1898413 logs.go:282] 0 containers: []
	W0414 15:38:27.627807 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:27.627815 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:27.627881 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:27.663046 1898413 cri.go:89] found id: ""
	I0414 15:38:27.663078 1898413 logs.go:282] 0 containers: []
	W0414 15:38:27.663089 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:27.663097 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:27.663167 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:27.703985 1898413 cri.go:89] found id: ""
	I0414 15:38:27.704018 1898413 logs.go:282] 0 containers: []
	W0414 15:38:27.704030 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:27.704038 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:27.704109 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:27.747906 1898413 cri.go:89] found id: ""
	I0414 15:38:27.747939 1898413 logs.go:282] 0 containers: []
	W0414 15:38:27.747958 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:27.747966 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:27.748032 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:27.784220 1898413 cri.go:89] found id: ""
	I0414 15:38:27.784320 1898413 logs.go:282] 0 containers: []
	W0414 15:38:27.784347 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:27.784356 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:27.784425 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:27.836226 1898413 cri.go:89] found id: ""
	I0414 15:38:27.836258 1898413 logs.go:282] 0 containers: []
	W0414 15:38:27.836266 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:27.836273 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:27.836336 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:27.874053 1898413 cri.go:89] found id: ""
	I0414 15:38:27.874103 1898413 logs.go:282] 0 containers: []
	W0414 15:38:27.874117 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:27.874125 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:27.874198 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:27.911815 1898413 cri.go:89] found id: ""
	I0414 15:38:27.911852 1898413 logs.go:282] 0 containers: []
	W0414 15:38:27.911864 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:27.911884 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:27.911900 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:27.969521 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:27.969562 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:27.986124 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:27.986162 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:28.065336 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:28.065361 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:28.065375 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:28.162425 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:28.162472 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:30.705813 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:30.719581 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:30.719653 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:30.759874 1898413 cri.go:89] found id: ""
	I0414 15:38:30.759905 1898413 logs.go:282] 0 containers: []
	W0414 15:38:30.759913 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:30.759920 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:30.759990 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:30.805404 1898413 cri.go:89] found id: ""
	I0414 15:38:30.805441 1898413 logs.go:282] 0 containers: []
	W0414 15:38:30.805454 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:30.805463 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:30.805538 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:30.853505 1898413 cri.go:89] found id: ""
	I0414 15:38:30.853552 1898413 logs.go:282] 0 containers: []
	W0414 15:38:30.853564 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:30.853572 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:30.853641 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:30.894357 1898413 cri.go:89] found id: ""
	I0414 15:38:30.894407 1898413 logs.go:282] 0 containers: []
	W0414 15:38:30.894419 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:30.894426 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:30.894486 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:30.932726 1898413 cri.go:89] found id: ""
	I0414 15:38:30.932762 1898413 logs.go:282] 0 containers: []
	W0414 15:38:30.932773 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:30.932781 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:30.932852 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:30.970399 1898413 cri.go:89] found id: ""
	I0414 15:38:30.970434 1898413 logs.go:282] 0 containers: []
	W0414 15:38:30.970447 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:30.970455 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:30.970550 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:31.012035 1898413 cri.go:89] found id: ""
	I0414 15:38:31.012070 1898413 logs.go:282] 0 containers: []
	W0414 15:38:31.012093 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:31.012111 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:31.012177 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:31.053663 1898413 cri.go:89] found id: ""
	I0414 15:38:31.053694 1898413 logs.go:282] 0 containers: []
	W0414 15:38:31.053702 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:31.053712 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:31.053724 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:31.108639 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:31.108687 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:31.123161 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:31.123211 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:31.199236 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:31.199261 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:31.199274 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:31.285178 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:31.285228 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
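
Besides the container probes, each pass tails the kubelet and CRI-O journals and pulls recent kernel warnings, exactly the journalctl and dmesg invocations shown above. A hypothetical standalone sketch of those gathers (same commands as the log, wrapped in the Go standard library):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints whatever output it produced, even on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s %v failed: %v\n", name, args, err)
	}
	fmt.Print(string(out))
}

func main() {
	run("sudo", "journalctl", "-u", "kubelet", "-n", "400") // last 400 kubelet lines
	run("sudo", "journalctl", "-u", "crio", "-n", "400")    // last 400 CRI-O lines
	// Warning-or-worse kernel messages, as in the logged dmesg pipeline.
	run("bash", "-c", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}
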
	I0414 15:38:33.832094 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:33.851232 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:33.851312 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:33.901166 1898413 cri.go:89] found id: ""
	I0414 15:38:33.901206 1898413 logs.go:282] 0 containers: []
	W0414 15:38:33.901220 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:33.901229 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:33.901298 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:33.946089 1898413 cri.go:89] found id: ""
	I0414 15:38:33.946127 1898413 logs.go:282] 0 containers: []
	W0414 15:38:33.946142 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:33.946151 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:33.946217 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:33.984950 1898413 cri.go:89] found id: ""
	I0414 15:38:33.984986 1898413 logs.go:282] 0 containers: []
	W0414 15:38:33.985000 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:33.985009 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:33.985080 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:34.024966 1898413 cri.go:89] found id: ""
	I0414 15:38:34.025003 1898413 logs.go:282] 0 containers: []
	W0414 15:38:34.025014 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:34.025020 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:34.025080 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:34.065315 1898413 cri.go:89] found id: ""
	I0414 15:38:34.065364 1898413 logs.go:282] 0 containers: []
	W0414 15:38:34.065377 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:34.065386 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:34.065459 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:34.103666 1898413 cri.go:89] found id: ""
	I0414 15:38:34.103706 1898413 logs.go:282] 0 containers: []
	W0414 15:38:34.103718 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:34.103727 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:34.103796 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:34.142439 1898413 cri.go:89] found id: ""
	I0414 15:38:34.142477 1898413 logs.go:282] 0 containers: []
	W0414 15:38:34.142489 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:34.142497 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:34.142564 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:34.186404 1898413 cri.go:89] found id: ""
	I0414 15:38:34.186451 1898413 logs.go:282] 0 containers: []
	W0414 15:38:34.186459 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:34.186470 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:34.186491 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:34.254722 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:34.254771 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:34.277696 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:34.277738 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:34.385887 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:34.385921 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:34.385941 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:34.471645 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:34.471699 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:37.016337 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:37.030624 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:37.030697 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:37.069720 1898413 cri.go:89] found id: ""
	I0414 15:38:37.069755 1898413 logs.go:282] 0 containers: []
	W0414 15:38:37.069772 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:37.069779 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:37.069831 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:37.105790 1898413 cri.go:89] found id: ""
	I0414 15:38:37.105821 1898413 logs.go:282] 0 containers: []
	W0414 15:38:37.105833 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:37.105849 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:37.105913 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:37.148251 1898413 cri.go:89] found id: ""
	I0414 15:38:37.148289 1898413 logs.go:282] 0 containers: []
	W0414 15:38:37.148302 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:37.148309 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:37.148379 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:37.187152 1898413 cri.go:89] found id: ""
	I0414 15:38:37.187185 1898413 logs.go:282] 0 containers: []
	W0414 15:38:37.187194 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:37.187200 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:37.187261 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:37.232695 1898413 cri.go:89] found id: ""
	I0414 15:38:37.232731 1898413 logs.go:282] 0 containers: []
	W0414 15:38:37.232743 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:37.232752 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:37.232831 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:37.268682 1898413 cri.go:89] found id: ""
	I0414 15:38:37.268717 1898413 logs.go:282] 0 containers: []
	W0414 15:38:37.268730 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:37.268738 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:37.268831 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:37.310901 1898413 cri.go:89] found id: ""
	I0414 15:38:37.310932 1898413 logs.go:282] 0 containers: []
	W0414 15:38:37.310941 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:37.310948 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:37.311005 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:37.350339 1898413 cri.go:89] found id: ""
	I0414 15:38:37.350392 1898413 logs.go:282] 0 containers: []
	W0414 15:38:37.350405 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:37.350418 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:37.350435 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:37.403123 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:37.403168 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:37.417480 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:37.417525 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:37.487965 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:37.487994 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:37.488014 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:37.569340 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:37.569383 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:40.110644 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:40.127221 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:40.127287 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:40.177319 1898413 cri.go:89] found id: ""
	I0414 15:38:40.177350 1898413 logs.go:282] 0 containers: []
	W0414 15:38:40.177360 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:40.177370 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:40.177433 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:40.227836 1898413 cri.go:89] found id: ""
	I0414 15:38:40.227867 1898413 logs.go:282] 0 containers: []
	W0414 15:38:40.227875 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:40.227882 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:40.227947 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:40.273282 1898413 cri.go:89] found id: ""
	I0414 15:38:40.273314 1898413 logs.go:282] 0 containers: []
	W0414 15:38:40.273325 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:40.273334 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:40.273401 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:40.313946 1898413 cri.go:89] found id: ""
	I0414 15:38:40.313976 1898413 logs.go:282] 0 containers: []
	W0414 15:38:40.313985 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:40.313991 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:40.314047 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:40.356392 1898413 cri.go:89] found id: ""
	I0414 15:38:40.356426 1898413 logs.go:282] 0 containers: []
	W0414 15:38:40.356437 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:40.356445 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:40.356523 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:40.399206 1898413 cri.go:89] found id: ""
	I0414 15:38:40.399236 1898413 logs.go:282] 0 containers: []
	W0414 15:38:40.399245 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:40.399251 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:40.399313 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:40.443753 1898413 cri.go:89] found id: ""
	I0414 15:38:40.443803 1898413 logs.go:282] 0 containers: []
	W0414 15:38:40.443817 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:40.443826 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:40.443908 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:40.480696 1898413 cri.go:89] found id: ""
	I0414 15:38:40.480722 1898413 logs.go:282] 0 containers: []
	W0414 15:38:40.480730 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:40.480739 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:40.480758 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:40.564891 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:40.564947 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:40.613274 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:40.613306 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:40.667665 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:40.667714 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:40.686502 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:40.686564 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:40.761605 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
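
Every "describe nodes" gather in this log ends the same way: kubectl cannot reach the apiserver on localhost:8443, so the command exits with status 1 and only the connection-refused message is captured. A minimal illustrative check of that condition (a hypothetical helper, though it uses the same kubectl binary path and kubeconfig as the log; it would have to run on the minikube node itself):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	switch {
	case err == nil:
		fmt.Print(stdout.String())
	case strings.Contains(stderr.String(), "refused"):
		// The state this log is stuck in: nothing is listening on localhost:8443.
		fmt.Println("apiserver unreachable on localhost:8443; node description unavailable")
	default:
		fmt.Printf("describe nodes failed: %v\n%s", err, stderr.String())
	}
}

Until a kube-apiserver container actually starts (the probes above keep finding none), this check can only keep reporting the refused connection.
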
	I0414 15:38:43.262654 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:43.276747 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:43.276821 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:43.317922 1898413 cri.go:89] found id: ""
	I0414 15:38:43.317953 1898413 logs.go:282] 0 containers: []
	W0414 15:38:43.317962 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:43.317969 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:43.318021 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:43.353883 1898413 cri.go:89] found id: ""
	I0414 15:38:43.353913 1898413 logs.go:282] 0 containers: []
	W0414 15:38:43.353921 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:43.353930 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:43.353986 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:43.391234 1898413 cri.go:89] found id: ""
	I0414 15:38:43.391272 1898413 logs.go:282] 0 containers: []
	W0414 15:38:43.391285 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:43.391292 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:43.391379 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:43.427749 1898413 cri.go:89] found id: ""
	I0414 15:38:43.427785 1898413 logs.go:282] 0 containers: []
	W0414 15:38:43.427794 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:43.427801 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:43.427856 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:43.467125 1898413 cri.go:89] found id: ""
	I0414 15:38:43.467156 1898413 logs.go:282] 0 containers: []
	W0414 15:38:43.467163 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:43.467169 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:43.467224 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:43.506705 1898413 cri.go:89] found id: ""
	I0414 15:38:43.506733 1898413 logs.go:282] 0 containers: []
	W0414 15:38:43.506742 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:43.506749 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:43.506818 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:43.547175 1898413 cri.go:89] found id: ""
	I0414 15:38:43.547205 1898413 logs.go:282] 0 containers: []
	W0414 15:38:43.547217 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:43.547224 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:43.547289 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:43.581946 1898413 cri.go:89] found id: ""
	I0414 15:38:43.581980 1898413 logs.go:282] 0 containers: []
	W0414 15:38:43.581989 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:43.582000 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:43.582013 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:43.595993 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:43.596032 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:43.669498 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:43.669526 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:43.669544 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:43.746578 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:43.746633 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:43.788720 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:43.788754 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:46.341819 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:46.356400 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:46.356476 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:46.393801 1898413 cri.go:89] found id: ""
	I0414 15:38:46.393840 1898413 logs.go:282] 0 containers: []
	W0414 15:38:46.393852 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:46.393861 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:46.393923 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:46.435643 1898413 cri.go:89] found id: ""
	I0414 15:38:46.435669 1898413 logs.go:282] 0 containers: []
	W0414 15:38:46.435679 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:46.435687 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:46.435753 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:46.475023 1898413 cri.go:89] found id: ""
	I0414 15:38:46.475060 1898413 logs.go:282] 0 containers: []
	W0414 15:38:46.475071 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:46.475079 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:46.475150 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:46.513254 1898413 cri.go:89] found id: ""
	I0414 15:38:46.513293 1898413 logs.go:282] 0 containers: []
	W0414 15:38:46.513305 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:46.513313 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:46.513386 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:46.550434 1898413 cri.go:89] found id: ""
	I0414 15:38:46.550466 1898413 logs.go:282] 0 containers: []
	W0414 15:38:46.550476 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:46.550483 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:46.550537 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:46.585846 1898413 cri.go:89] found id: ""
	I0414 15:38:46.585880 1898413 logs.go:282] 0 containers: []
	W0414 15:38:46.585891 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:46.585900 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:46.585962 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:46.625907 1898413 cri.go:89] found id: ""
	I0414 15:38:46.625947 1898413 logs.go:282] 0 containers: []
	W0414 15:38:46.625957 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:46.625968 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:46.626035 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:46.661293 1898413 cri.go:89] found id: ""
	I0414 15:38:46.661325 1898413 logs.go:282] 0 containers: []
	W0414 15:38:46.661337 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:46.661351 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:46.661369 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:46.717655 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:46.717700 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:46.732433 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:46.732477 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:46.807090 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:46.807111 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:46.807126 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:46.891473 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:46.891526 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
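
The timestamps show the whole gather bundle repeating roughly every three seconds after each failed pgrep for a kube-apiserver process. A compact sketch of such a wait loop (hypothetical; the two-minute cap is an assumption, the real timeout is not visible in this excerpt):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists; pattern as in the log.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3 s cadence of the retries above
	}
	fmt.Println("timed out waiting for a kube-apiserver process")
}
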
	I0414 15:38:49.436292 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:49.454590 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:49.454675 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:49.512655 1898413 cri.go:89] found id: ""
	I0414 15:38:49.512687 1898413 logs.go:282] 0 containers: []
	W0414 15:38:49.512699 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:49.512706 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:49.512777 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:49.560523 1898413 cri.go:89] found id: ""
	I0414 15:38:49.560557 1898413 logs.go:282] 0 containers: []
	W0414 15:38:49.560569 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:49.560577 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:49.560650 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:49.603124 1898413 cri.go:89] found id: ""
	I0414 15:38:49.603153 1898413 logs.go:282] 0 containers: []
	W0414 15:38:49.603165 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:49.603173 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:49.603243 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:49.648343 1898413 cri.go:89] found id: ""
	I0414 15:38:49.648379 1898413 logs.go:282] 0 containers: []
	W0414 15:38:49.648392 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:49.648401 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:49.648471 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:49.694712 1898413 cri.go:89] found id: ""
	I0414 15:38:49.694749 1898413 logs.go:282] 0 containers: []
	W0414 15:38:49.694761 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:49.694769 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:49.694847 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:49.747799 1898413 cri.go:89] found id: ""
	I0414 15:38:49.747837 1898413 logs.go:282] 0 containers: []
	W0414 15:38:49.747851 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:49.747860 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:49.747931 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:49.800385 1898413 cri.go:89] found id: ""
	I0414 15:38:49.800420 1898413 logs.go:282] 0 containers: []
	W0414 15:38:49.800431 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:49.800447 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:49.800596 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:49.845867 1898413 cri.go:89] found id: ""
	I0414 15:38:49.845903 1898413 logs.go:282] 0 containers: []
	W0414 15:38:49.845915 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:49.845929 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:49.845951 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:49.912677 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:49.912720 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:49.932498 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:49.932544 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:50.039933 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:50.039957 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:50.039973 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:50.144184 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:50.144230 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:52.696677 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:52.711922 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:52.712016 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:52.758996 1898413 cri.go:89] found id: ""
	I0414 15:38:52.759033 1898413 logs.go:282] 0 containers: []
	W0414 15:38:52.759046 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:52.759055 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:52.759127 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:52.814822 1898413 cri.go:89] found id: ""
	I0414 15:38:52.814858 1898413 logs.go:282] 0 containers: []
	W0414 15:38:52.814870 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:52.814878 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:52.814948 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:52.865377 1898413 cri.go:89] found id: ""
	I0414 15:38:52.865414 1898413 logs.go:282] 0 containers: []
	W0414 15:38:52.865426 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:52.865435 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:52.865507 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:52.907975 1898413 cri.go:89] found id: ""
	I0414 15:38:52.908004 1898413 logs.go:282] 0 containers: []
	W0414 15:38:52.908016 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:52.908024 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:52.908099 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:52.955739 1898413 cri.go:89] found id: ""
	I0414 15:38:52.955787 1898413 logs.go:282] 0 containers: []
	W0414 15:38:52.955800 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:52.955808 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:52.955884 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:53.008088 1898413 cri.go:89] found id: ""
	I0414 15:38:53.008118 1898413 logs.go:282] 0 containers: []
	W0414 15:38:53.008129 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:53.008137 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:53.008208 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:53.058277 1898413 cri.go:89] found id: ""
	I0414 15:38:53.058316 1898413 logs.go:282] 0 containers: []
	W0414 15:38:53.058327 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:53.058335 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:53.058438 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:53.101205 1898413 cri.go:89] found id: ""
	I0414 15:38:53.101243 1898413 logs.go:282] 0 containers: []
	W0414 15:38:53.101256 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:53.101269 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:53.101288 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:53.208057 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:53.208089 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:53.208106 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:53.317234 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:53.317283 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:53.362760 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:53.362797 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:53.427781 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:53.427846 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:55.945112 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:55.960416 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:55.960495 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:55.999973 1898413 cri.go:89] found id: ""
	I0414 15:38:56.000005 1898413 logs.go:282] 0 containers: []
	W0414 15:38:56.000019 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:56.000028 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:56.000109 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:56.043426 1898413 cri.go:89] found id: ""
	I0414 15:38:56.043465 1898413 logs.go:282] 0 containers: []
	W0414 15:38:56.043480 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:56.043496 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:56.043566 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:56.097607 1898413 cri.go:89] found id: ""
	I0414 15:38:56.097644 1898413 logs.go:282] 0 containers: []
	W0414 15:38:56.097656 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:56.097663 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:56.097733 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:56.140945 1898413 cri.go:89] found id: ""
	I0414 15:38:56.140979 1898413 logs.go:282] 0 containers: []
	W0414 15:38:56.140990 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:56.140997 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:56.141066 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:56.182987 1898413 cri.go:89] found id: ""
	I0414 15:38:56.183019 1898413 logs.go:282] 0 containers: []
	W0414 15:38:56.183031 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:56.183039 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:56.183115 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:56.224938 1898413 cri.go:89] found id: ""
	I0414 15:38:56.224971 1898413 logs.go:282] 0 containers: []
	W0414 15:38:56.224984 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:56.224992 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:56.225072 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:56.264810 1898413 cri.go:89] found id: ""
	I0414 15:38:56.264841 1898413 logs.go:282] 0 containers: []
	W0414 15:38:56.264852 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:56.264859 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:56.264934 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:56.307791 1898413 cri.go:89] found id: ""
	I0414 15:38:56.307819 1898413 logs.go:282] 0 containers: []
	W0414 15:38:56.307827 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:56.307837 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:56.307855 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:56.375565 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:56.375626 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:56.394836 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:56.394882 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:56.482830 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:56.482863 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:56.482882 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:56.564456 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:56.564508 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:38:59.116109 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:38:59.134421 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:38:59.134502 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:38:59.178387 1898413 cri.go:89] found id: ""
	I0414 15:38:59.178427 1898413 logs.go:282] 0 containers: []
	W0414 15:38:59.178440 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:38:59.178447 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:38:59.178518 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:38:59.216031 1898413 cri.go:89] found id: ""
	I0414 15:38:59.216061 1898413 logs.go:282] 0 containers: []
	W0414 15:38:59.216069 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:38:59.216075 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:38:59.216130 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:38:59.259108 1898413 cri.go:89] found id: ""
	I0414 15:38:59.259141 1898413 logs.go:282] 0 containers: []
	W0414 15:38:59.259152 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:38:59.259161 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:38:59.259228 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:38:59.297184 1898413 cri.go:89] found id: ""
	I0414 15:38:59.297221 1898413 logs.go:282] 0 containers: []
	W0414 15:38:59.297233 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:38:59.297241 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:38:59.297309 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:38:59.337312 1898413 cri.go:89] found id: ""
	I0414 15:38:59.337345 1898413 logs.go:282] 0 containers: []
	W0414 15:38:59.337357 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:38:59.337370 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:38:59.337443 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:38:59.374079 1898413 cri.go:89] found id: ""
	I0414 15:38:59.374113 1898413 logs.go:282] 0 containers: []
	W0414 15:38:59.374123 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:38:59.374131 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:38:59.374211 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:38:59.410282 1898413 cri.go:89] found id: ""
	I0414 15:38:59.410320 1898413 logs.go:282] 0 containers: []
	W0414 15:38:59.410333 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:38:59.410342 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:38:59.410446 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:38:59.455978 1898413 cri.go:89] found id: ""
	I0414 15:38:59.456013 1898413 logs.go:282] 0 containers: []
	W0414 15:38:59.456024 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:38:59.456037 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:38:59.456053 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:38:59.511277 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:38:59.511328 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:38:59.526832 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:38:59.526872 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:38:59.609353 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:38:59.609382 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:38:59.609400 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:38:59.702178 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:38:59.702232 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 15:39:02.275914 1898413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:39:02.291577 1898413 kubeadm.go:597] duration metric: took 4m3.587783966s to restartPrimaryControlPlane
	W0414 15:39:02.291690 1898413 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0414 15:39:02.291723 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 15:39:02.789641 1898413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:39:02.806999 1898413 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:39:02.818297 1898413 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:39:02.829102 1898413 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:39:02.829123 1898413 kubeadm.go:157] found existing configuration files:
	
	I0414 15:39:02.829174 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:39:02.839117 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:39:02.839180 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:39:02.849811 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:39:02.860242 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:39:02.860327 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:39:02.870741 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:39:02.880988 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:39:02.881075 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:39:02.895518 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:39:02.907549 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:39:02.907633 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:39:02.919116 1898413 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:39:03.003336 1898413 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 15:39:03.003431 1898413 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:39:03.203950 1898413 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:39:03.204095 1898413 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:39:03.204229 1898413 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 15:39:03.471715 1898413 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:39:03.474118 1898413 out.go:235]   - Generating certificates and keys ...
	I0414 15:39:03.474241 1898413 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:39:03.474325 1898413 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:39:03.474437 1898413 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 15:39:03.474527 1898413 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 15:39:03.474633 1898413 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 15:39:03.474708 1898413 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 15:39:03.474821 1898413 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 15:39:03.474980 1898413 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 15:39:03.475116 1898413 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 15:39:03.475240 1898413 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 15:39:03.475292 1898413 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 15:39:03.475380 1898413 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:39:03.827410 1898413 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:39:03.961983 1898413 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:39:04.393682 1898413 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:39:04.800834 1898413 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:39:04.827746 1898413 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:39:04.827910 1898413 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:39:04.828011 1898413 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:39:04.976741 1898413 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:39:04.978280 1898413 out.go:235]   - Booting up control plane ...
	I0414 15:39:04.978438 1898413 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:39:04.982045 1898413 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:39:04.983897 1898413 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:39:04.985432 1898413 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:39:04.993708 1898413 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 15:39:44.996260 1898413 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 15:39:44.996406 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:39:44.996710 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:39:49.997409 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:39:49.997720 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:39:59.998165 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:39:59.998492 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:40:19.999836 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:40:20.000119 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:41:00.003120 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:41:00.003386 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:41:00.003398 1898413 kubeadm.go:310] 
	I0414 15:41:00.003454 1898413 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 15:41:00.003505 1898413 kubeadm.go:310] 		timed out waiting for the condition
	I0414 15:41:00.003516 1898413 kubeadm.go:310] 
	I0414 15:41:00.003563 1898413 kubeadm.go:310] 	This error is likely caused by:
	I0414 15:41:00.003610 1898413 kubeadm.go:310] 		- The kubelet is not running
	I0414 15:41:00.003755 1898413 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 15:41:00.003768 1898413 kubeadm.go:310] 
	I0414 15:41:00.003909 1898413 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 15:41:00.003958 1898413 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 15:41:00.004004 1898413 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 15:41:00.004014 1898413 kubeadm.go:310] 
	I0414 15:41:00.004154 1898413 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 15:41:00.004254 1898413 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 15:41:00.004461 1898413 kubeadm.go:310] 
	I0414 15:41:00.004624 1898413 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 15:41:00.004729 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 15:41:00.004834 1898413 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 15:41:00.004928 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 15:41:00.004936 1898413 kubeadm.go:310] 
	I0414 15:41:00.008730 1898413 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:41:00.008864 1898413 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 15:41:00.008958 1898413 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0414 15:41:00.009679 1898413 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0414 15:41:00.009733 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0414 15:41:02.387961 1898413 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.378197306s)
	I0414 15:41:02.388060 1898413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:41:02.411248 1898413 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:41:02.427108 1898413 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:41:02.427131 1898413 kubeadm.go:157] found existing configuration files:
	
	I0414 15:41:02.427181 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:41:02.441668 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:41:02.441762 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:41:02.457485 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:41:02.477126 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:41:02.477215 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:41:02.496130 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:41:02.514981 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:41:02.515058 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:41:02.532977 1898413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:41:02.549085 1898413 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:41:02.549172 1898413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:41:02.566337 1898413 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:41:02.683069 1898413 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0414 15:41:02.683143 1898413 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:41:02.910499 1898413 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:41:02.910683 1898413 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:41:02.910827 1898413 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0414 15:41:03.135954 1898413 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:41:03.137918 1898413 out.go:235]   - Generating certificates and keys ...
	I0414 15:41:03.138055 1898413 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:41:03.138148 1898413 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:41:03.138253 1898413 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0414 15:41:03.138309 1898413 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0414 15:41:03.139027 1898413 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0414 15:41:03.139509 1898413 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0414 15:41:03.140234 1898413 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0414 15:41:03.140922 1898413 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0414 15:41:03.141779 1898413 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0414 15:41:03.142523 1898413 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0414 15:41:03.142662 1898413 kubeadm.go:310] [certs] Using the existing "sa" key
	I0414 15:41:03.142743 1898413 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:41:03.363580 1898413 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:41:03.608956 1898413 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:41:03.763971 1898413 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:41:03.961295 1898413 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:41:03.987989 1898413 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:41:03.989858 1898413 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:41:03.989934 1898413 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:41:04.176285 1898413 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:41:04.178377 1898413 out.go:235]   - Booting up control plane ...
	I0414 15:41:04.178524 1898413 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:41:04.199661 1898413 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:41:04.199791 1898413 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:41:04.199941 1898413 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:41:04.210418 1898413 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0414 15:41:44.213360 1898413 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0414 15:41:44.214129 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:41:44.214418 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:41:49.217823 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:41:49.218057 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:41:59.216722 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:41:59.217001 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:42:19.217388 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:42:19.217695 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:42:59.215921 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:42:59.216197 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:42:59.216228 1898413 kubeadm.go:310] 
	I0414 15:42:59.216283 1898413 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 15:42:59.216336 1898413 kubeadm.go:310] 		timed out waiting for the condition
	I0414 15:42:59.216342 1898413 kubeadm.go:310] 
	I0414 15:42:59.216389 1898413 kubeadm.go:310] 	This error is likely caused by:
	I0414 15:42:59.216433 1898413 kubeadm.go:310] 		- The kubelet is not running
	I0414 15:42:59.216581 1898413 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 15:42:59.216592 1898413 kubeadm.go:310] 
	I0414 15:42:59.216725 1898413 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 15:42:59.216770 1898413 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 15:42:59.216818 1898413 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 15:42:59.216822 1898413 kubeadm.go:310] 
	I0414 15:42:59.216907 1898413 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 15:42:59.217006 1898413 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 15:42:59.217015 1898413 kubeadm.go:310] 
	I0414 15:42:59.217187 1898413 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 15:42:59.217303 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 15:42:59.217409 1898413 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 15:42:59.217503 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 15:42:59.217511 1898413 kubeadm.go:310] 
	I0414 15:42:59.219259 1898413 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:42:59.219407 1898413 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 15:42:59.219514 1898413 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 15:42:59.220159 1898413 kubeadm.go:394] duration metric: took 8m0.569569368s to StartCluster
	I0414 15:42:59.220230 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:42:59.220304 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:42:59.296348 1898413 cri.go:89] found id: ""
	I0414 15:42:59.296381 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.296393 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:42:59.296403 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:42:59.296511 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:42:59.357668 1898413 cri.go:89] found id: ""
	I0414 15:42:59.357701 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.357713 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:42:59.357720 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:42:59.357797 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:42:59.408582 1898413 cri.go:89] found id: ""
	I0414 15:42:59.408613 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.408621 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:42:59.408627 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:42:59.408702 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:42:59.457402 1898413 cri.go:89] found id: ""
	I0414 15:42:59.457438 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.457449 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:42:59.457457 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:42:59.457530 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:42:59.508543 1898413 cri.go:89] found id: ""
	I0414 15:42:59.508601 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.508613 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:42:59.508621 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:42:59.508691 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:42:59.557213 1898413 cri.go:89] found id: ""
	I0414 15:42:59.557250 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.557262 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:42:59.557270 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:42:59.557343 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:42:59.607994 1898413 cri.go:89] found id: ""
	I0414 15:42:59.608023 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.608048 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:42:59.608057 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:42:59.608129 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:42:59.657459 1898413 cri.go:89] found id: ""
	I0414 15:42:59.657494 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.657507 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:42:59.657525 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:42:59.657549 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:42:59.723160 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:42:59.723223 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:42:59.743367 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:42:59.743418 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:42:59.876644 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:42:59.876695 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:42:59.876713 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:43:00.032948 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:43:00.032994 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 15:43:00.086613 1898413 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 15:43:00.086686 1898413 out.go:270] * 
	W0414 15:43:00.086809 1898413 out.go:270] * 
	W0414 15:43:00.087917 1898413 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 15:43:00.091413 1898413 out.go:201] 
	W0414 15:43:00.092767 1898413 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:43:00.092825 1898413 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 15:43:00.092861 1898413 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 15:43:00.094446 1898413 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-529869 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
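The failing start above ends with minikube's own suggestion of a kubelet cgroup-driver mismatch. As a minimal sketch only (a hypothetical manual retry built from the args in the test invocation plus the flag suggested in the log, not something the test itself ran), a follow-up attempt could look like:

	# hypothetical manual retry of the same profile with the suggested cgroup-driver override
	out/minikube-linux-amd64 start -p old-k8s-version-529869 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd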
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 2 (298.94646ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-529869 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-036922                             | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922                             | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922                             | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922                             | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | cat /etc/docker/daemon.json                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC |                     |
	|         | docker system info                                   |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922                             | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo cat                    | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo cat                    | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922                             | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo cat                    | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922                             | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | find /etc/crio -type f -exec                         |                           |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-036922 sudo                        | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | crio config                                          |                           |         |         |                     |                     |
	| delete  | -p custom-flannel-036922                             | custom-flannel-036922     | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	| start   | -p bridge-036922 --memory=3072                       | bridge-036922             | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-036922                         | enable-default-cni-036922 | jenkins | v1.35.0 | 14 Apr 25 15:42 UTC | 14 Apr 25 15:42 UTC |
	|         | pgrep -a kubelet                                     |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 15:42:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 15:42:20.393428 1908903 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:42:20.393707 1908903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:42:20.393717 1908903 out.go:358] Setting ErrFile to fd 2...
	I0414 15:42:20.393721 1908903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:42:20.394014 1908903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:42:20.394737 1908903 out.go:352] Setting JSON to false
	I0414 15:42:20.396002 1908903 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":41084,"bootTime":1744604256,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:42:20.396077 1908903 start.go:139] virtualization: kvm guest
	I0414 15:42:20.398284 1908903 out.go:177] * [bridge-036922] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:42:20.399747 1908903 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:42:20.399774 1908903 notify.go:220] Checking for updates...
	I0414 15:42:20.402506 1908903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:42:20.403700 1908903 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:42:20.404951 1908903 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:42:20.406045 1908903 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:42:20.407237 1908903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:42:20.408819 1908903 config.go:182] Loaded profile config "enable-default-cni-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:20.408920 1908903 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:20.409003 1908903 config.go:182] Loaded profile config "old-k8s-version-529869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 15:42:20.409078 1908903 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:42:20.449900 1908903 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 15:42:20.451424 1908903 start.go:297] selected driver: kvm2
	I0414 15:42:20.451445 1908903 start.go:901] validating driver "kvm2" against <nil>
	I0414 15:42:20.451460 1908903 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:42:20.452406 1908903 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:42:20.452490 1908903 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 15:42:20.470925 1908903 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 15:42:20.470988 1908903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 15:42:20.471237 1908903 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:42:20.471280 1908903 cni.go:84] Creating CNI manager for "bridge"
	I0414 15:42:20.471289 1908903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 15:42:20.471347 1908903 start.go:340] cluster config:
	{Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:42:20.471467 1908903 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:42:20.473355 1908903 out.go:177] * Starting "bridge-036922" primary control-plane node in "bridge-036922" cluster
	I0414 15:42:18.311367 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:18.311873 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:18.311907 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:18.311830 1907444 retry.go:31] will retry after 1.961785823s: waiting for domain to come up
	I0414 15:42:20.275622 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:20.276217 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:20.276245 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:20.276160 1907444 retry.go:31] will retry after 3.443279587s: waiting for domain to come up
	I0414 15:42:18.552316 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:21.052659 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:20.474918 1908903 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:42:20.474969 1908903 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 15:42:20.474980 1908903 cache.go:56] Caching tarball of preloaded images
	I0414 15:42:20.475087 1908903 preload.go:172] Found /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 15:42:20.475100 1908903 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 15:42:20.475200 1908903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/config.json ...
	I0414 15:42:20.475219 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/config.json: {Name:mk46811239729f3d2abef41cf6cd2fb6300eacaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:20.475365 1908903 start.go:360] acquireMachinesLock for bridge-036922: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:42:23.721372 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:23.721981 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:23.722015 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:23.721948 1907444 retry.go:31] will retry after 3.812874947s: waiting for domain to come up
	I0414 15:42:27.536454 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:27.537033 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:27.537056 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:27.537004 1907444 retry.go:31] will retry after 3.540212628s: waiting for domain to come up
	I0414 15:42:23.551530 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:25.552074 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:28.051484 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:32.627768 1908903 start.go:364] duration metric: took 12.152363514s to acquireMachinesLock for "bridge-036922"
	I0414 15:42:32.627850 1908903 start.go:93] Provisioning new machine with config: &{Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:42:32.627970 1908903 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 15:42:31.081114 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.081620 1907421 main.go:141] libmachine: (flannel-036922) found domain IP: 192.168.72.200
	I0414 15:42:31.081647 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has current primary IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.081654 1907421 main.go:141] libmachine: (flannel-036922) reserving static IP address...
	I0414 15:42:31.082097 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find host DHCP lease matching {name: "flannel-036922", mac: "52:54:00:47:a6:f3", ip: "192.168.72.200"} in network mk-flannel-036922
	I0414 15:42:31.169991 1907421 main.go:141] libmachine: (flannel-036922) DBG | Getting to WaitForSSH function...
	I0414 15:42:31.170026 1907421 main.go:141] libmachine: (flannel-036922) reserved static IP address 192.168.72.200 for domain flannel-036922
	I0414 15:42:31.170038 1907421 main.go:141] libmachine: (flannel-036922) waiting for SSH...
	I0414 15:42:31.173332 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.173746 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.173785 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.173994 1907421 main.go:141] libmachine: (flannel-036922) DBG | Using SSH client type: external
	I0414 15:42:31.174024 1907421 main.go:141] libmachine: (flannel-036922) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa (-rw-------)
	I0414 15:42:31.174056 1907421 main.go:141] libmachine: (flannel-036922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:42:31.174071 1907421 main.go:141] libmachine: (flannel-036922) DBG | About to run SSH command:
	I0414 15:42:31.174081 1907421 main.go:141] libmachine: (flannel-036922) DBG | exit 0
	I0414 15:42:31.299043 1907421 main.go:141] libmachine: (flannel-036922) DBG | SSH cmd err, output: <nil>: 
	I0414 15:42:31.299375 1907421 main.go:141] libmachine: (flannel-036922) KVM machine creation complete
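
The WaitForSSH block above shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until the command succeeds, which is how the driver decides the guest's sshd is reachable. A minimal sketch of that probe, assuming a placeholder key path and a fixed two-second retry interval (the real flags are the ones logged above):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running `exit 0` on the guest until sshd accepts the connection.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest sshd is up and the key was accepted
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready after %s", addr, timeout)
}

func main() {
	if err := waitForSSH("192.168.72.200", "/path/to/machines/flannel-036922/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
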
	I0414 15:42:31.299910 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetConfigRaw
	I0414 15:42:31.300482 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:31.300707 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:31.300937 1907421 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 15:42:31.300956 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:31.302412 1907421 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 15:42:31.302427 1907421 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 15:42:31.302432 1907421 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 15:42:31.302437 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.305226 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.305622 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.305653 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.305832 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.306067 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.306262 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.306413 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.306582 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.306835 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.306848 1907421 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 15:42:31.409981 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:31.410015 1907421 main.go:141] libmachine: Detecting the provisioner...
	I0414 15:42:31.410027 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.412803 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.413105 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.413155 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.413279 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.413504 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.413690 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.413892 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.414073 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.414440 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.414462 1907421 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 15:42:31.519809 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 15:42:31.519916 1907421 main.go:141] libmachine: found compatible host: buildroot
	I0414 15:42:31.519927 1907421 main.go:141] libmachine: Provisioning with buildroot...
	I0414 15:42:31.519936 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetMachineName
	I0414 15:42:31.520223 1907421 buildroot.go:166] provisioning hostname "flannel-036922"
	I0414 15:42:31.520239 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetMachineName
	I0414 15:42:31.520436 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.523093 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.523484 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.523524 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.523722 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.523907 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.524062 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.524183 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.524321 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.524614 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.524632 1907421 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-036922 && echo "flannel-036922" | sudo tee /etc/hostname
	I0414 15:42:31.645537 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-036922
	
	I0414 15:42:31.645576 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.648224 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.648558 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.648593 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.648747 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.648942 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.649094 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.649255 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.649473 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.649681 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.649696 1907421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-036922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-036922/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-036922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:42:31.764596 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
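
The shell snippet above is idempotent: it only touches /etc/hosts when the new hostname is not already mapped, rewriting an existing 127.0.1.1 entry if one exists and appending a new one otherwise. A small Go illustration of the same decision logic, operating on a string rather than the real file (not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// patchHosts mimics the logged shell script: keep the contents unchanged if the hostname is
// already mapped, rewrite an existing 127.0.1.1 entry, or append a new one.
func patchHosts(hosts, hostname string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 1 && f[len(f)-1] == hostname {
			return hosts // already mapped, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // replace the stale entry
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + hostname // no entry yet, append one
}

func main() {
	fmt.Println(patchHosts("127.0.0.1 localhost\n127.0.1.1 minikube", "flannel-036922"))
}
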
	I0414 15:42:31.764638 1907421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:42:31.764666 1907421 buildroot.go:174] setting up certificates
	I0414 15:42:31.764679 1907421 provision.go:84] configureAuth start
	I0414 15:42:31.764694 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetMachineName
	I0414 15:42:31.765045 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:31.768031 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.768340 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.768368 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.768520 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.770840 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.771160 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.771189 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.771328 1907421 provision.go:143] copyHostCerts
	I0414 15:42:31.771404 1907421 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:42:31.771416 1907421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:42:31.771486 1907421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:42:31.771610 1907421 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:42:31.771619 1907421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:42:31.771644 1907421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:42:31.771710 1907421 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:42:31.771717 1907421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:42:31.771741 1907421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:42:31.771791 1907421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.flannel-036922 san=[127.0.0.1 192.168.72.200 flannel-036922 localhost minikube]
	I0414 15:42:31.968023 1907421 provision.go:177] copyRemoteCerts
	I0414 15:42:31.968092 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:42:31.968117 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.970932 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.971208 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.971239 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.971419 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.971624 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.971760 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.971949 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.059121 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:42:32.086750 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0414 15:42:32.113750 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:42:32.140600 1907421 provision.go:87] duration metric: took 375.905384ms to configureAuth
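
configureAuth generates a per-machine server certificate whose SANs are the ones logged above (127.0.0.1, the machine IP, the hostname, localhost, minikube) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. Below is a stripped-down crypto/x509 sketch of generating such a certificate; it is self-signed for brevity, whereas the real flow signs the server cert with the ca.pem/ca-key.pem pair, and the 26280h lifetime simply reuses the CertExpiration value from the profile config.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-036922"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"flannel-036922", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.200")},
	}
	// self-signed in this sketch; the provisioner signs with the CA key pair instead
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
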
	I0414 15:42:32.140649 1907421 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:42:32.140825 1907421 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:32.140910 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.143669 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.144072 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.144098 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.144301 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.144503 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.144664 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.144839 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.145044 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:32.145348 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:32.145371 1907421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:42:32.376226 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:42:32.376251 1907421 main.go:141] libmachine: Checking connection to Docker...
	I0414 15:42:32.376267 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetURL
	I0414 15:42:32.377737 1907421 main.go:141] libmachine: (flannel-036922) DBG | using libvirt version 6000000
	I0414 15:42:32.380146 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.380479 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.380510 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.380661 1907421 main.go:141] libmachine: Docker is up and running!
	I0414 15:42:32.380675 1907421 main.go:141] libmachine: Reticulating splines...
	I0414 15:42:32.380683 1907421 client.go:171] duration metric: took 24.152526095s to LocalClient.Create
	I0414 15:42:32.380708 1907421 start.go:167] duration metric: took 24.152593581s to libmachine.API.Create "flannel-036922"
	I0414 15:42:32.380736 1907421 start.go:293] postStartSetup for "flannel-036922" (driver="kvm2")
	I0414 15:42:32.380753 1907421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:42:32.380784 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.381034 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:42:32.381060 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.383436 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.383744 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.383765 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.383939 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.384128 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.384303 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.384449 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.469641 1907421 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:42:32.474716 1907421 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:42:32.474754 1907421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:42:32.474843 1907421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:42:32.474963 1907421 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:42:32.475080 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:42:32.485571 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:42:32.513908 1907421 start.go:296] duration metric: took 133.150087ms for postStartSetup
	I0414 15:42:32.513976 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetConfigRaw
	I0414 15:42:32.514671 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:32.517434 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.517794 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.517830 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.518116 1907421 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/config.json ...
	I0414 15:42:32.518321 1907421 start.go:128] duration metric: took 24.310122388s to createHost
	I0414 15:42:32.518346 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.520587 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.520903 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.520939 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.521138 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.521368 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.521508 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.521672 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.521818 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:32.522073 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:32.522085 1907421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:42:32.627543 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744645352.607238172
	
	I0414 15:42:32.627581 1907421 fix.go:216] guest clock: 1744645352.607238172
	I0414 15:42:32.627603 1907421 fix.go:229] Guest: 2025-04-14 15:42:32.607238172 +0000 UTC Remote: 2025-04-14 15:42:32.518333951 +0000 UTC m=+24.431599100 (delta=88.904221ms)
	I0414 15:42:32.627642 1907421 fix.go:200] guest clock delta is within tolerance: 88.904221ms
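
fix.go compares the guest's `date +%s.%N` output against the host-side timestamp and only intervenes when the skew exceeds a tolerance; here the delta is ~89ms, so nothing is adjusted. A small sketch of that comparison (the 2-second tolerance is an assumed value for illustration, not minikube's configured threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns `date +%s.%N` output (e.g. "1744645352.607238172") into a time.Time.
func parseEpoch(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad or trim to nanosecond precision
		nsec, _ = strconv.ParseInt(frac, 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

// withinTolerance reports whether guest and host clocks differ by no more than tol.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d <= tol
}

func main() {
	guest, _ := parseEpoch("1744645352.607238172") // guest-side `date +%s.%N` from the log
	host := time.Date(2025, time.April, 14, 15, 42, 32, 518333951, time.UTC)
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true: ~89ms of skew
}
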
	I0414 15:42:32.627654 1907421 start.go:83] releasing machines lock for "flannel-036922", held for 24.419524725s
	I0414 15:42:32.627691 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.628088 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:32.631249 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.631790 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.631818 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.632042 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.632785 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.633042 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.633151 1907421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:42:32.633227 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.633252 1907421 ssh_runner.go:195] Run: cat /version.json
	I0414 15:42:32.633267 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.636525 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.636562 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.636948 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.636985 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.637010 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.637085 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.637238 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.637465 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.637483 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.637697 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.637723 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.637882 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.637900 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.638077 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.717463 1907421 ssh_runner.go:195] Run: systemctl --version
	I0414 15:42:32.745427 1907421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:42:32.909851 1907421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:42:32.916503 1907421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:42:32.916578 1907421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:42:32.933971 1907421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:42:32.933995 1907421 start.go:495] detecting cgroup driver to use...
	I0414 15:42:32.934071 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:42:32.952308 1907421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:42:32.970781 1907421 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:42:32.970865 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:42:32.987714 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:42:33.006216 1907421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:42:30.551892 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:32.552139 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:33.157399 1907421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:42:33.324202 1907421 docker.go:233] disabling docker service ...
	I0414 15:42:33.324273 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:42:33.341314 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:42:33.357080 1907421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:42:33.549837 1907421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:42:33.699436 1907421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:42:33.714710 1907421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:42:33.738926 1907421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 15:42:33.739015 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.751493 1907421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:42:33.751594 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.764325 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.776597 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.789601 1907421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:42:33.802342 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.813914 1907421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.837591 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
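
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs, forces conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A compact Go equivalent of the first two rewrites, illustrating the sed expressions rather than reproducing minikube's own code:

package main

import (
	"fmt"
	"regexp"
)

// rewrite applies the same substitutions as the logged sed commands to the config text.
func rewrite(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"old/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewrite(in))
}
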
	I0414 15:42:33.849585 1907421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:42:33.862417 1907421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:42:33.862494 1907421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:42:33.879615 1907421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 15:42:33.891734 1907421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:34.014337 1907421 ssh_runner.go:195] Run: sudo systemctl restart crio
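
Just above, the netfilter check fails because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet, so br_netfilter is loaded with modprobe and IPv4 forwarding is switched on before crio is restarted. A sketch of that fallback (needs root; a simplification, not the exact minikube code path):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge-netfilter sysctl key is
// missing, load br_netfilter, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
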
	I0414 15:42:34.117483 1907421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:42:34.117570 1907421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:42:34.123036 1907421 start.go:563] Will wait 60s for crictl version
	I0414 15:42:34.123111 1907421 ssh_runner.go:195] Run: which crictl
	I0414 15:42:34.128066 1907421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:42:34.173872 1907421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:42:34.173955 1907421 ssh_runner.go:195] Run: crio --version
	I0414 15:42:34.210232 1907421 ssh_runner.go:195] Run: crio --version
	I0414 15:42:34.246653 1907421 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 15:42:32.631413 1908903 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 15:42:32.631616 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:32.631698 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:32.649503 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0414 15:42:32.649969 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:32.650582 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:42:32.650606 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:32.651035 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:32.651256 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:32.651415 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:32.651580 1908903 start.go:159] libmachine.API.Create for "bridge-036922" (driver="kvm2")
	I0414 15:42:32.651640 1908903 client.go:168] LocalClient.Create starting
	I0414 15:42:32.651683 1908903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem
	I0414 15:42:32.651736 1908903 main.go:141] libmachine: Decoding PEM data...
	I0414 15:42:32.651761 1908903 main.go:141] libmachine: Parsing certificate...
	I0414 15:42:32.651848 1908903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem
	I0414 15:42:32.651877 1908903 main.go:141] libmachine: Decoding PEM data...
	I0414 15:42:32.651896 1908903 main.go:141] libmachine: Parsing certificate...
	I0414 15:42:32.651923 1908903 main.go:141] libmachine: Running pre-create checks...
	I0414 15:42:32.651944 1908903 main.go:141] libmachine: (bridge-036922) Calling .PreCreateCheck
	I0414 15:42:32.652284 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetConfigRaw
	I0414 15:42:32.652746 1908903 main.go:141] libmachine: Creating machine...
	I0414 15:42:32.652761 1908903 main.go:141] libmachine: (bridge-036922) Calling .Create
	I0414 15:42:32.652923 1908903 main.go:141] libmachine: (bridge-036922) creating KVM machine...
	I0414 15:42:32.652944 1908903 main.go:141] libmachine: (bridge-036922) creating network...
	I0414 15:42:32.654276 1908903 main.go:141] libmachine: (bridge-036922) DBG | found existing default KVM network
	I0414 15:42:32.655546 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.655372 1909012 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:fb:6f} reservation:<nil>}
	I0414 15:42:32.656280 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.656199 1909012 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:dc:27:da} reservation:<nil>}
	I0414 15:42:32.657561 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.657462 1909012 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000292ac0}
	I0414 15:42:32.657591 1908903 main.go:141] libmachine: (bridge-036922) DBG | created network xml: 
	I0414 15:42:32.657603 1908903 main.go:141] libmachine: (bridge-036922) DBG | <network>
	I0414 15:42:32.657610 1908903 main.go:141] libmachine: (bridge-036922) DBG |   <name>mk-bridge-036922</name>
	I0414 15:42:32.657618 1908903 main.go:141] libmachine: (bridge-036922) DBG |   <dns enable='no'/>
	I0414 15:42:32.657625 1908903 main.go:141] libmachine: (bridge-036922) DBG |   
	I0414 15:42:32.657634 1908903 main.go:141] libmachine: (bridge-036922) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0414 15:42:32.657644 1908903 main.go:141] libmachine: (bridge-036922) DBG |     <dhcp>
	I0414 15:42:32.657656 1908903 main.go:141] libmachine: (bridge-036922) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0414 15:42:32.657665 1908903 main.go:141] libmachine: (bridge-036922) DBG |     </dhcp>
	I0414 15:42:32.657673 1908903 main.go:141] libmachine: (bridge-036922) DBG |   </ip>
	I0414 15:42:32.657685 1908903 main.go:141] libmachine: (bridge-036922) DBG |   
	I0414 15:42:32.657692 1908903 main.go:141] libmachine: (bridge-036922) DBG | </network>
	I0414 15:42:32.657700 1908903 main.go:141] libmachine: (bridge-036922) DBG | 
	I0414 15:42:32.663623 1908903 main.go:141] libmachine: (bridge-036922) DBG | trying to create private KVM network mk-bridge-036922 192.168.61.0/24...
	I0414 15:42:32.748953 1908903 main.go:141] libmachine: (bridge-036922) DBG | private KVM network mk-bridge-036922 192.168.61.0/24 created
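
network.go walks candidate private /24 subnets, skips the ones already claimed by other libvirt networks (192.168.39.0/24 and 192.168.50.0/24 here) and takes the first free one, 192.168.61.0/24, which then becomes the DHCP range in the generated network XML. A toy version of that selection (the candidate list is illustrative):

package main

import "fmt"

// pickFreeSubnet returns the first candidate /24 that is not already claimed by another network.
func pickFreeSubnet(candidates []string, taken map[string]bool) (string, bool) {
	for _, c := range candidates {
		if !taken[c] {
			return c, true
		}
	}
	return "", false
}

func main() {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	taken := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true}
	if s, ok := pickFreeSubnet(candidates, taken); ok {
		fmt.Println("using free private subnet", s) // 192.168.61.0/24, matching the log
	}
}
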
	I0414 15:42:32.748994 1908903 main.go:141] libmachine: (bridge-036922) setting up store path in /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922 ...
	I0414 15:42:32.749036 1908903 main.go:141] libmachine: (bridge-036922) building disk image from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 15:42:32.749186 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.748956 1909012 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:42:32.749224 1908903 main.go:141] libmachine: (bridge-036922) Downloading /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 15:42:33.058633 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.058470 1909012 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa...
	I0414 15:42:33.132442 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.132298 1909012 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/bridge-036922.rawdisk...
	I0414 15:42:33.132477 1908903 main.go:141] libmachine: (bridge-036922) DBG | Writing magic tar header
	I0414 15:42:33.132492 1908903 main.go:141] libmachine: (bridge-036922) DBG | Writing SSH key tar header
	I0414 15:42:33.132503 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.132444 1909012 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922 ...
	I0414 15:42:33.132598 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922
	I0414 15:42:33.132618 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines
	I0414 15:42:33.132632 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922 (perms=drwx------)
	I0414 15:42:33.132653 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines (perms=drwxr-xr-x)
	I0414 15:42:33.132668 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube (perms=drwxr-xr-x)
	I0414 15:42:33.132681 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971 (perms=drwxrwxr-x)
	I0414 15:42:33.132691 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 15:42:33.132708 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 15:42:33.132722 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:42:33.132731 1908903 main.go:141] libmachine: (bridge-036922) creating domain...
	I0414 15:42:33.132765 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971
	I0414 15:42:33.132797 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 15:42:33.132810 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins
	I0414 15:42:33.132825 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home
	I0414 15:42:33.132858 1908903 main.go:141] libmachine: (bridge-036922) DBG | skipping /home - not owner
	I0414 15:42:33.134361 1908903 main.go:141] libmachine: (bridge-036922) define libvirt domain using xml: 
	I0414 15:42:33.134417 1908903 main.go:141] libmachine: (bridge-036922) <domain type='kvm'>
	I0414 15:42:33.134428 1908903 main.go:141] libmachine: (bridge-036922)   <name>bridge-036922</name>
	I0414 15:42:33.134436 1908903 main.go:141] libmachine: (bridge-036922)   <memory unit='MiB'>3072</memory>
	I0414 15:42:33.134447 1908903 main.go:141] libmachine: (bridge-036922)   <vcpu>2</vcpu>
	I0414 15:42:33.134454 1908903 main.go:141] libmachine: (bridge-036922)   <features>
	I0414 15:42:33.134476 1908903 main.go:141] libmachine: (bridge-036922)     <acpi/>
	I0414 15:42:33.134491 1908903 main.go:141] libmachine: (bridge-036922)     <apic/>
	I0414 15:42:33.134498 1908903 main.go:141] libmachine: (bridge-036922)     <pae/>
	I0414 15:42:33.134503 1908903 main.go:141] libmachine: (bridge-036922)     
	I0414 15:42:33.134515 1908903 main.go:141] libmachine: (bridge-036922)   </features>
	I0414 15:42:33.134526 1908903 main.go:141] libmachine: (bridge-036922)   <cpu mode='host-passthrough'>
	I0414 15:42:33.134533 1908903 main.go:141] libmachine: (bridge-036922)   
	I0414 15:42:33.134542 1908903 main.go:141] libmachine: (bridge-036922)   </cpu>
	I0414 15:42:33.134548 1908903 main.go:141] libmachine: (bridge-036922)   <os>
	I0414 15:42:33.134557 1908903 main.go:141] libmachine: (bridge-036922)     <type>hvm</type>
	I0414 15:42:33.134591 1908903 main.go:141] libmachine: (bridge-036922)     <boot dev='cdrom'/>
	I0414 15:42:33.134612 1908903 main.go:141] libmachine: (bridge-036922)     <boot dev='hd'/>
	I0414 15:42:33.134622 1908903 main.go:141] libmachine: (bridge-036922)     <bootmenu enable='no'/>
	I0414 15:42:33.134628 1908903 main.go:141] libmachine: (bridge-036922)   </os>
	I0414 15:42:33.134637 1908903 main.go:141] libmachine: (bridge-036922)   <devices>
	I0414 15:42:33.134649 1908903 main.go:141] libmachine: (bridge-036922)     <disk type='file' device='cdrom'>
	I0414 15:42:33.134666 1908903 main.go:141] libmachine: (bridge-036922)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/boot2docker.iso'/>
	I0414 15:42:33.134677 1908903 main.go:141] libmachine: (bridge-036922)       <target dev='hdc' bus='scsi'/>
	I0414 15:42:33.134686 1908903 main.go:141] libmachine: (bridge-036922)       <readonly/>
	I0414 15:42:33.134695 1908903 main.go:141] libmachine: (bridge-036922)     </disk>
	I0414 15:42:33.134704 1908903 main.go:141] libmachine: (bridge-036922)     <disk type='file' device='disk'>
	I0414 15:42:33.134716 1908903 main.go:141] libmachine: (bridge-036922)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 15:42:33.134734 1908903 main.go:141] libmachine: (bridge-036922)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/bridge-036922.rawdisk'/>
	I0414 15:42:33.134745 1908903 main.go:141] libmachine: (bridge-036922)       <target dev='hda' bus='virtio'/>
	I0414 15:42:33.134753 1908903 main.go:141] libmachine: (bridge-036922)     </disk>
	I0414 15:42:33.134763 1908903 main.go:141] libmachine: (bridge-036922)     <interface type='network'>
	I0414 15:42:33.134772 1908903 main.go:141] libmachine: (bridge-036922)       <source network='mk-bridge-036922'/>
	I0414 15:42:33.134782 1908903 main.go:141] libmachine: (bridge-036922)       <model type='virtio'/>
	I0414 15:42:33.134790 1908903 main.go:141] libmachine: (bridge-036922)     </interface>
	I0414 15:42:33.134798 1908903 main.go:141] libmachine: (bridge-036922)     <interface type='network'>
	I0414 15:42:33.134804 1908903 main.go:141] libmachine: (bridge-036922)       <source network='default'/>
	I0414 15:42:33.134810 1908903 main.go:141] libmachine: (bridge-036922)       <model type='virtio'/>
	I0414 15:42:33.134823 1908903 main.go:141] libmachine: (bridge-036922)     </interface>
	I0414 15:42:33.134831 1908903 main.go:141] libmachine: (bridge-036922)     <serial type='pty'>
	I0414 15:42:33.134841 1908903 main.go:141] libmachine: (bridge-036922)       <target port='0'/>
	I0414 15:42:33.134851 1908903 main.go:141] libmachine: (bridge-036922)     </serial>
	I0414 15:42:33.134860 1908903 main.go:141] libmachine: (bridge-036922)     <console type='pty'>
	I0414 15:42:33.134870 1908903 main.go:141] libmachine: (bridge-036922)       <target type='serial' port='0'/>
	I0414 15:42:33.134878 1908903 main.go:141] libmachine: (bridge-036922)     </console>
	I0414 15:42:33.134887 1908903 main.go:141] libmachine: (bridge-036922)     <rng model='virtio'>
	I0414 15:42:33.134893 1908903 main.go:141] libmachine: (bridge-036922)       <backend model='random'>/dev/random</backend>
	I0414 15:42:33.134901 1908903 main.go:141] libmachine: (bridge-036922)     </rng>
	I0414 15:42:33.134928 1908903 main.go:141] libmachine: (bridge-036922)     
	I0414 15:42:33.134945 1908903 main.go:141] libmachine: (bridge-036922)     
	I0414 15:42:33.134958 1908903 main.go:141] libmachine: (bridge-036922)   </devices>
	I0414 15:42:33.134967 1908903 main.go:141] libmachine: (bridge-036922) </domain>
	I0414 15:42:33.134981 1908903 main.go:141] libmachine: (bridge-036922) 
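
The XML printed above defines a 2-vCPU, 3072MiB domain that boots the boot2docker ISO from a virtual cdrom, attaches the raw disk image, and gets two virtio NICs: one on the per-profile mk-bridge-036922 network and one on libvirt's default NAT network for outbound traffic. The kvm2 driver registers and boots the domain through the libvirt API; a rough virsh equivalent, with a placeholder path for the XML file, would look like this:

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart sketches the equivalent of what the driver does via libvirt:
// register the domain XML, then boot the domain.
func defineAndStart(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("/tmp/bridge-036922.xml", "bridge-036922"); err != nil {
		fmt.Println(err)
	}
}
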
	I0414 15:42:33.139633 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:ce:30:4b in network default
	I0414 15:42:33.140227 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:33.140266 1908903 main.go:141] libmachine: (bridge-036922) starting domain...
	I0414 15:42:33.140279 1908903 main.go:141] libmachine: (bridge-036922) ensuring networks are active...
	I0414 15:42:33.140917 1908903 main.go:141] libmachine: (bridge-036922) Ensuring network default is active
	I0414 15:42:33.141340 1908903 main.go:141] libmachine: (bridge-036922) Ensuring network mk-bridge-036922 is active
	I0414 15:42:33.142027 1908903 main.go:141] libmachine: (bridge-036922) getting domain XML...
	I0414 15:42:33.143089 1908903 main.go:141] libmachine: (bridge-036922) creating domain...
	I0414 15:42:33.536114 1908903 main.go:141] libmachine: (bridge-036922) waiting for IP...
	I0414 15:42:33.536974 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:33.537437 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:33.537518 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.537440 1909012 retry.go:31] will retry after 243.753367ms: waiting for domain to come up
	I0414 15:42:33.783413 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:33.784074 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:33.784104 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.784044 1909012 retry.go:31] will retry after 339.050332ms: waiting for domain to come up
	I0414 15:42:34.124346 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:34.124819 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:34.124847 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:34.124793 1909012 retry.go:31] will retry after 477.978489ms: waiting for domain to come up
	I0414 15:42:34.604689 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:34.605405 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:34.605478 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:34.605396 1909012 retry.go:31] will retry after 606.717012ms: waiting for domain to come up
	I0414 15:42:35.214566 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:35.215302 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:35.215335 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:35.215304 1909012 retry.go:31] will retry after 585.677483ms: waiting for domain to come up
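
While the domain boots, the driver polls the network's DHCP leases for the new MAC address and sleeps a growing, jittered interval between attempts (243ms, 339ms, 477ms, ... in the retries above). A schematic version of that loop; lookupIP is a stand-in for querying the libvirt DHCP leases, and the backoff constants are illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for reading the libvirt network's DHCP leases for the domain's MAC.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

// waitForIP polls for an address, sleeping a growing, jittered interval between attempts.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
		backoff = backoff * 3 / 2 // grow the delay, as in the logged retries
	}
	return "", fmt.Errorf("domain did not obtain an IP within %s", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
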
	I0414 15:42:34.248060 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:34.251061 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:34.251494 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:34.251536 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:34.251790 1907421 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 15:42:34.257345 1907421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:42:34.271269 1907421 kubeadm.go:883] updating cluster {Name:flannel-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:42:34.271419 1907421 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:42:34.271491 1907421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:42:34.310047 1907421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 15:42:34.310148 1907421 ssh_runner.go:195] Run: which lz4
	I0414 15:42:34.314914 1907421 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:42:34.319663 1907421 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:42:34.319706 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 15:42:36.005122 1907421 crio.go:462] duration metric: took 1.690246926s to copy over tarball
	I0414 15:42:36.005231 1907421 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 15:42:34.553205 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:37.052635 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:38.486201 1907421 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.480920023s)
	I0414 15:42:38.486301 1907421 crio.go:469] duration metric: took 2.481131687s to extract the tarball
	I0414 15:42:38.486328 1907421 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 15:42:38.536845 1907421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:42:38.588854 1907421 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 15:42:38.588889 1907421 cache_images.go:84] Images are preloaded, skipping loading
	I0414 15:42:38.588901 1907421 kubeadm.go:934] updating node { 192.168.72.200 8443 v1.32.2 crio true true} ...
	I0414 15:42:38.589066 1907421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-036922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0414 15:42:38.589161 1907421 ssh_runner.go:195] Run: crio config
	I0414 15:42:38.639561 1907421 cni.go:84] Creating CNI manager for "flannel"
	I0414 15:42:38.639596 1907421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 15:42:38.639626 1907421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.200 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-036922 NodeName:flannel-036922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 15:42:38.639887 1907421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-036922"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
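(The block above is the multi-document kubeadm config, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch for splitting and listing the documents in such a file, assuming gopkg.in/yaml.v3 and a local copy named kubeadm.yaml; illustrative only, not minikube code:)

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path is an example; the log shows the file ending up at /var/tmp/minikube/kubeadm.yaml on the node.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Each document carries apiVersion and kind, e.g. kubeadm.k8s.io/v1beta4 InitConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}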
	I0414 15:42:38.640037 1907421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 15:42:38.651901 1907421 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 15:42:38.651997 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 15:42:38.662036 1907421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0414 15:42:38.680585 1907421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 15:42:38.698787 1907421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0414 15:42:38.721640 1907421 ssh_runner.go:195] Run: grep 192.168.72.200	control-plane.minikube.internal$ /etc/hosts
	I0414 15:42:38.726592 1907421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
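(The two commands above make the control-plane.minikube.internal mapping in /etc/hosts idempotent: first check for the exact entry, then filter out any stale line ending in that hostname and append the current IP before copying the file back. A simplified Go equivalent of that rewrite, with the IP and hostname taken from this run; illustrative only, and it needs root against the real /etc/hosts:)

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// upsertHostsEntry rewrites a hosts-style file so that exactly one line maps
// hostname to ip, dropping any previous line that ends with tab+hostname,
// like the grep -v / echo pair in the log above.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Example values from this run; pointed at a scratch file rather than /etc/hosts.
	if err := upsertHostsEntry("hosts.test", "192.168.72.200", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}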
	I0414 15:42:38.740768 1907421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:38.899231 1907421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:42:38.918385 1907421 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922 for IP: 192.168.72.200
	I0414 15:42:38.918418 1907421 certs.go:194] generating shared ca certs ...
	I0414 15:42:38.918437 1907421 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:38.918692 1907421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 15:42:38.918762 1907421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 15:42:38.918790 1907421 certs.go:256] generating profile certs ...
	I0414 15:42:38.918873 1907421 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.key
	I0414 15:42:38.918893 1907421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt with IP's: []
	I0414 15:42:39.040105 1907421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt ...
	I0414 15:42:39.040138 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: {Name:mk2541d497355f75330e1e8d45ca7c05c9151252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.040344 1907421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.key ...
	I0414 15:42:39.040361 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.key: {Name:mk380b7bf852abf1b8988acb006ad6fc4e37f4e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.040469 1907421 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca
	I0414 15:42:39.040487 1907421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.200]
	I0414 15:42:39.250195 1907421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca ...
	I0414 15:42:39.250233 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca: {Name:mkbe9b8905a248872f1e8ad1d846ab894bf1ccb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.250430 1907421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca ...
	I0414 15:42:39.250443 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca: {Name:mk00eed7dd27975a2c63b91d58b73bd49c86808b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.250518 1907421 certs.go:381] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt
	I0414 15:42:39.250615 1907421 certs.go:385] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key
	I0414 15:42:39.250679 1907421 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key
	I0414 15:42:39.250697 1907421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt with IP's: []
	I0414 15:42:39.442422 1907421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt ...
	I0414 15:42:39.442455 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt: {Name:mka0a36bc874e1164bc79c06b6893dbd73138c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.442664 1907421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key ...
	I0414 15:42:39.442682 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key: {Name:mkee6ef65a530aee53bdaac10b3fb60ee09dbe64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.442891 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem (1338 bytes)
	W0414 15:42:39.442929 1907421 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270_empty.pem, impossibly tiny 0 bytes
	I0414 15:42:39.442940 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 15:42:39.442967 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 15:42:39.442990 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 15:42:39.443010 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 15:42:39.443051 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:42:39.443680 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 15:42:39.474252 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:42:39.504144 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:42:39.530953 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:42:39.560025 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 15:42:39.592232 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 15:42:39.640260 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:42:39.670285 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 15:42:39.698670 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:42:39.726986 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:42:39.754399 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:42:39.788251 1907421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:42:39.807950 1907421 ssh_runner.go:195] Run: openssl version
	I0414 15:42:39.814532 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:42:39.827541 1907421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:42:39.834201 1907421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:42:39.834285 1907421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:42:39.841587 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:42:39.853993 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:42:39.879246 1907421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:42:39.884226 1907421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:42:39.884303 1907421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:42:39.890625 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 15:42:39.903508 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:42:39.915981 1907421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:42:39.921299 1907421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:42:39.921368 1907421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:42:39.927524 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
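(The openssl/ln pairs above install each CA under /etc/ssl/certs by subject hash, so OpenSSL-style lookups of <hash>.0 resolve to the copied PEM. A rough Go equivalent that shells out to the same openssl subcommand; a sketch, not minikube's implementation, and the paths from this run need root:)

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the "openssl x509 -hash -noout -in" + "ln -fs" pair
// from the log: compute the certificate's subject hash, then point <hash>.0 at the PEM.
func linkCertByHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link if present
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}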
	I0414 15:42:39.939848 1907421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:42:39.945029 1907421 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 15:42:39.945115 1907421 kubeadm.go:392] StartCluster: {Name:flannel-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:42:39.945228 1907421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:42:39.945336 1907421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:42:39.993625 1907421 cri.go:89] found id: ""
	I0414 15:42:39.993726 1907421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 15:42:40.007930 1907421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:42:40.022297 1907421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:42:40.033983 1907421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:42:40.034008 1907421 kubeadm.go:157] found existing configuration files:
	
	I0414 15:42:40.034060 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:42:40.044411 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:42:40.044493 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:42:40.057768 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:42:40.068947 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:42:40.069049 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:42:40.080075 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:42:40.090907 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:42:40.090972 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:42:40.102034 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:42:40.113045 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:42:40.113105 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
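(The grep/rm sequence above is the stale-config cleanup: each existing /etc/kubernetes/*.conf is kept only if it already points at https://control-plane.minikube.internal:8443; anything else is removed so the kubeadm init that follows can regenerate a consistent set. A condensed sketch of that check, not minikube's code; the file list and endpoint are taken from the log:)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing somewhere else: drop it so kubeadm init
			// regenerates a consistent set, as the rm -f calls above do.
			fmt.Println("removing stale config:", path)
			_ = os.Remove(path)
			continue
		}
		fmt.Println("keeping:", path)
	}
}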
	I0414 15:42:40.123704 1907421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:42:40.185411 1907421 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 15:42:40.185554 1907421 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:42:40.312075 1907421 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:42:40.312258 1907421 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:42:40.312435 1907421 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 15:42:40.324898 1907421 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:42:35.802698 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:35.803793 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:35.803828 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:35.803707 1909012 retry.go:31] will retry after 741.40736ms: waiting for domain to come up
	I0414 15:42:36.546572 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:36.547205 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:36.547270 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:36.547183 1909012 retry.go:31] will retry after 1.039019091s: waiting for domain to come up
	I0414 15:42:37.587454 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:37.588056 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:37.588092 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:37.588030 1909012 retry.go:31] will retry after 1.343543316s: waiting for domain to come up
	I0414 15:42:38.933902 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:38.934408 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:38.934499 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:38.934406 1909012 retry.go:31] will retry after 1.727468698s: waiting for domain to come up
	I0414 15:42:40.461045 1907421 out.go:235]   - Generating certificates and keys ...
	I0414 15:42:40.461189 1907421 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:42:40.461295 1907421 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:42:40.461411 1907421 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 15:42:40.576540 1907421 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 15:42:41.022193 1907421 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 15:42:41.083437 1907421 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 15:42:41.196088 1907421 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 15:42:41.196393 1907421 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-036922 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0414 15:42:41.305312 1907421 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 15:42:41.305484 1907421 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-036922 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0414 15:42:41.499140 1907421 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 15:42:41.648257 1907421 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 15:42:41.792405 1907421 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 15:42:41.792718 1907421 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:42:41.986714 1907421 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:42:42.087153 1907421 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 15:42:42.240947 1907421 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:42:42.386910 1907421 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:42:42.522160 1907421 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:42:42.523999 1907421 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:42:42.528115 1907421 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:42:42.574611 1907421 out.go:235]   - Booting up control plane ...
	I0414 15:42:42.574762 1907421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:42:42.574856 1907421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:42:42.574940 1907421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:42:42.575132 1907421 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:42:42.575258 1907421 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:42:42.575350 1907421 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:42:42.720695 1907421 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 15:42:42.720861 1907421 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 15:42:39.553503 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:41.567599 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:40.664501 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:40.665113 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:40.665156 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:40.665097 1909012 retry.go:31] will retry after 2.255462045s: waiting for domain to come up
	I0414 15:42:42.921827 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:42.922516 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:42.922554 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:42.922480 1909012 retry.go:31] will retry after 2.269647989s: waiting for domain to come up
	I0414 15:42:45.194050 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:45.194621 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:45.194654 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:45.194559 1909012 retry.go:31] will retry after 2.479039637s: waiting for domain to come up
	I0414 15:42:44.113357 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:45.058678 1905530 pod_ready.go:93] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.058714 1905530 pod_ready.go:82] duration metric: took 33.01340484s for pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.058732 1905530 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.061628 1905530 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-ss42g" not found
	I0414 15:42:45.061664 1905530 pod_ready.go:82] duration metric: took 2.923616ms for pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace to be "Ready" ...
	E0414 15:42:45.061680 1905530 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-ss42g" not found
	I0414 15:42:45.061691 1905530 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.070770 1905530 pod_ready.go:93] pod "etcd-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.070808 1905530 pod_ready.go:82] duration metric: took 9.101557ms for pod "etcd-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.070826 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.079164 1905530 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.079198 1905530 pod_ready.go:82] duration metric: took 8.362407ms for pod "kube-apiserver-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.079213 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.087476 1905530 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.087505 1905530 pod_ready.go:82] duration metric: took 8.282442ms for pod "kube-controller-manager-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.087518 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-cf9hn" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.249123 1905530 pod_ready.go:93] pod "kube-proxy-cf9hn" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.249155 1905530 pod_ready.go:82] duration metric: took 161.628764ms for pod "kube-proxy-cf9hn" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.249170 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.650160 1905530 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.650266 1905530 pod_ready.go:82] duration metric: took 401.084136ms for pod "kube-scheduler-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.650296 1905530 pod_ready.go:39] duration metric: took 33.615016594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:42:45.650331 1905530 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:42:45.650448 1905530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:42:45.673971 1905530 api_server.go:72] duration metric: took 34.576366052s to wait for apiserver process to appear ...
	I0414 15:42:45.674014 1905530 api_server.go:88] waiting for apiserver healthz status ...
	I0414 15:42:45.674039 1905530 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0414 15:42:45.682032 1905530 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0414 15:42:45.683306 1905530 api_server.go:141] control plane version: v1.32.2
	I0414 15:42:45.683334 1905530 api_server.go:131] duration metric: took 9.31155ms to wait for apiserver health ...
	I0414 15:42:45.683345 1905530 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 15:42:45.851783 1905530 system_pods.go:59] 7 kube-system pods found
	I0414 15:42:45.851838 1905530 system_pods.go:61] "coredns-668d6bf9bc-bwv4t" [790563e2-b22e-4bbe-bbc5-b52f76b839b5] Running
	I0414 15:42:45.851847 1905530 system_pods.go:61] "etcd-enable-default-cni-036922" [527007de-831a-4582-9cbb-baa01fc7f75a] Running
	I0414 15:42:45.851855 1905530 system_pods.go:61] "kube-apiserver-enable-default-cni-036922" [d3500886-ec33-4079-9f8d-efe868d36abe] Running
	I0414 15:42:45.851861 1905530 system_pods.go:61] "kube-controller-manager-enable-default-cni-036922" [109c13d5-06e7-4b5a-af83-2c859621953f] Running
	I0414 15:42:45.851870 1905530 system_pods.go:61] "kube-proxy-cf9hn" [75a57fce-ef6e-43a7-9c2f-57b3a2b02829] Running
	I0414 15:42:45.851875 1905530 system_pods.go:61] "kube-scheduler-enable-default-cni-036922" [d0f475a2-3fcc-44f3-8eb9-e3e2aaebb279] Running
	I0414 15:42:45.851883 1905530 system_pods.go:61] "storage-provisioner" [5b286627-a3ba-4c03-ab91-e9dc6297afd2] Running
	I0414 15:42:45.851892 1905530 system_pods.go:74] duration metric: took 168.539138ms to wait for pod list to return data ...
	I0414 15:42:45.851906 1905530 default_sa.go:34] waiting for default service account to be created ...
	I0414 15:42:46.051425 1905530 default_sa.go:45] found service account: "default"
	I0414 15:42:46.051460 1905530 default_sa.go:55] duration metric: took 199.54254ms for default service account to be created ...
	I0414 15:42:46.051473 1905530 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 15:42:46.251287 1905530 system_pods.go:86] 7 kube-system pods found
	I0414 15:42:46.251414 1905530 system_pods.go:89] "coredns-668d6bf9bc-bwv4t" [790563e2-b22e-4bbe-bbc5-b52f76b839b5] Running
	I0414 15:42:46.251431 1905530 system_pods.go:89] "etcd-enable-default-cni-036922" [527007de-831a-4582-9cbb-baa01fc7f75a] Running
	I0414 15:42:46.251438 1905530 system_pods.go:89] "kube-apiserver-enable-default-cni-036922" [d3500886-ec33-4079-9f8d-efe868d36abe] Running
	I0414 15:42:46.251447 1905530 system_pods.go:89] "kube-controller-manager-enable-default-cni-036922" [109c13d5-06e7-4b5a-af83-2c859621953f] Running
	I0414 15:42:46.251454 1905530 system_pods.go:89] "kube-proxy-cf9hn" [75a57fce-ef6e-43a7-9c2f-57b3a2b02829] Running
	I0414 15:42:46.251459 1905530 system_pods.go:89] "kube-scheduler-enable-default-cni-036922" [d0f475a2-3fcc-44f3-8eb9-e3e2aaebb279] Running
	I0414 15:42:46.251465 1905530 system_pods.go:89] "storage-provisioner" [5b286627-a3ba-4c03-ab91-e9dc6297afd2] Running
	I0414 15:42:46.251476 1905530 system_pods.go:126] duration metric: took 199.99443ms to wait for k8s-apps to be running ...
	I0414 15:42:46.251491 1905530 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 15:42:46.251557 1905530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:42:46.272907 1905530 system_svc.go:56] duration metric: took 21.403314ms WaitForService to wait for kubelet
	I0414 15:42:46.272947 1905530 kubeadm.go:582] duration metric: took 35.175353213s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:42:46.272975 1905530 node_conditions.go:102] verifying NodePressure condition ...
	I0414 15:42:46.449997 1905530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 15:42:46.450040 1905530 node_conditions.go:123] node cpu capacity is 2
	I0414 15:42:46.450061 1905530 node_conditions.go:105] duration metric: took 177.079158ms to run NodePressure ...
	I0414 15:42:46.450077 1905530 start.go:241] waiting for startup goroutines ...
	I0414 15:42:46.450088 1905530 start.go:246] waiting for cluster config update ...
	I0414 15:42:46.450103 1905530 start.go:255] writing updated cluster config ...
	I0414 15:42:46.450597 1905530 ssh_runner.go:195] Run: rm -f paused
	I0414 15:42:46.505249 1905530 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 15:42:46.508181 1905530 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-036922" cluster and "default" namespace by default
	I0414 15:42:43.225629 1907421 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.246285ms
	I0414 15:42:43.225795 1907421 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 15:42:49.223859 1907421 kubeadm.go:310] [api-check] The API server is healthy after 6.002939425s
	I0414 15:42:49.246703 1907421 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 15:42:49.269556 1907421 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 15:42:49.315606 1907421 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 15:42:49.315885 1907421 kubeadm.go:310] [mark-control-plane] Marking the node flannel-036922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 15:42:49.332520 1907421 kubeadm.go:310] [bootstrap-token] Using token: 6dsy98.vc3wpm9di98p1e2l
	I0414 15:42:47.675403 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:47.675860 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:47.675916 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:47.675831 1909012 retry.go:31] will retry after 3.188398794s: waiting for domain to come up
	I0414 15:42:49.335286 1907421 out.go:235]   - Configuring RBAC rules ...
	I0414 15:42:49.335480 1907421 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 15:42:49.342167 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 15:42:49.352554 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 15:42:49.361630 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 15:42:49.366627 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 15:42:49.372335 1907421 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 15:42:49.632892 1907421 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 15:42:50.092146 1907421 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 15:42:50.689823 1907421 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 15:42:50.691428 1907421 kubeadm.go:310] 
	I0414 15:42:50.691533 1907421 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 15:42:50.691545 1907421 kubeadm.go:310] 
	I0414 15:42:50.691654 1907421 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 15:42:50.691666 1907421 kubeadm.go:310] 
	I0414 15:42:50.691717 1907421 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 15:42:50.691812 1907421 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 15:42:50.691896 1907421 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 15:42:50.691906 1907421 kubeadm.go:310] 
	I0414 15:42:50.692009 1907421 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 15:42:50.692042 1907421 kubeadm.go:310] 
	I0414 15:42:50.692107 1907421 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 15:42:50.692120 1907421 kubeadm.go:310] 
	I0414 15:42:50.692187 1907421 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 15:42:50.692272 1907421 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 15:42:50.692368 1907421 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 15:42:50.692381 1907421 kubeadm.go:310] 
	I0414 15:42:50.692494 1907421 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 15:42:50.692586 1907421 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 15:42:50.692598 1907421 kubeadm.go:310] 
	I0414 15:42:50.692692 1907421 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6dsy98.vc3wpm9di98p1e2l \
	I0414 15:42:50.692847 1907421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f \
	I0414 15:42:50.692890 1907421 kubeadm.go:310] 	--control-plane 
	I0414 15:42:50.692903 1907421 kubeadm.go:310] 
	I0414 15:42:50.693022 1907421 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 15:42:50.693031 1907421 kubeadm.go:310] 
	I0414 15:42:50.693144 1907421 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6dsy98.vc3wpm9di98p1e2l \
	I0414 15:42:50.693291 1907421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f 
	I0414 15:42:50.693806 1907421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:42:50.694067 1907421 cni.go:84] Creating CNI manager for "flannel"
	I0414 15:42:50.696952 1907421 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0414 15:42:50.698346 1907421 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 15:42:50.706416 1907421 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 15:42:50.706438 1907421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 15:42:50.727656 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 15:42:51.287720 1907421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 15:42:51.287835 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:51.287871 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-036922 minikube.k8s.io/updated_at=2025_04_14T15_42_51_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=flannel-036922 minikube.k8s.io/primary=true
	I0414 15:42:51.430599 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:51.430598 1907421 ops.go:34] apiserver oom_adj: -16
	I0414 15:42:51.930825 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:52.430933 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:52.931267 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:53.431500 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:53.931720 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:54.431756 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:54.561136 1907421 kubeadm.go:1113] duration metric: took 3.273384012s to wait for elevateKubeSystemPrivileges
	I0414 15:42:54.561187 1907421 kubeadm.go:394] duration metric: took 14.616077815s to StartCluster
	I0414 15:42:54.561215 1907421 settings.go:142] acquiring lock: {Name:mkf8fdccd744793c9a876a07da6b33fabe880d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:54.561317 1907421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:42:54.562809 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/kubeconfig: {Name:mk700cb2cf46a87df11c1873f52c26c76c14915e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:54.563052 1907421 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:42:54.563065 1907421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 15:42:54.563117 1907421 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 15:42:54.563242 1907421 addons.go:69] Setting storage-provisioner=true in profile "flannel-036922"
	I0414 15:42:54.563265 1907421 addons.go:238] Setting addon storage-provisioner=true in "flannel-036922"
	I0414 15:42:54.563273 1907421 addons.go:69] Setting default-storageclass=true in profile "flannel-036922"
	I0414 15:42:54.563300 1907421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-036922"
	I0414 15:42:54.563305 1907421 host.go:66] Checking if "flannel-036922" exists ...
	I0414 15:42:54.563335 1907421 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:54.563788 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.563838 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.563865 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.563907 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.566159 1907421 out.go:177] * Verifying Kubernetes components...
	I0414 15:42:54.567701 1907421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:54.582661 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0414 15:42:54.583246 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.583768 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.583805 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.584263 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.584496 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:54.585593 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0414 15:42:54.586151 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.586695 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.586721 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.587156 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.587767 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.587823 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.588816 1907421 addons.go:238] Setting addon default-storageclass=true in "flannel-036922"
	I0414 15:42:54.588862 1907421 host.go:66] Checking if "flannel-036922" exists ...
	I0414 15:42:54.589169 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.589217 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.605944 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
	I0414 15:42:54.605986 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0414 15:42:54.606442 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.606714 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.607143 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.607160 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.607282 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.607308 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.607611 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.607729 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.607824 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:54.608193 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.608234 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.610044 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:54.612210 1907421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:42:50.867819 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:50.868522 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:50.868555 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:50.868467 1909012 retry.go:31] will retry after 3.520845781s: waiting for domain to come up
	I0414 15:42:54.391586 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.392265 1908903 main.go:141] libmachine: (bridge-036922) found domain IP: 192.168.61.165
	I0414 15:42:54.392301 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has current primary IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.392309 1908903 main.go:141] libmachine: (bridge-036922) reserving static IP address...
	I0414 15:42:54.392694 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find host DHCP lease matching {name: "bridge-036922", mac: "52:54:00:d8:e5:52", ip: "192.168.61.165"} in network mk-bridge-036922
	I0414 15:42:54.493139 1908903 main.go:141] libmachine: (bridge-036922) DBG | Getting to WaitForSSH function...
	I0414 15:42:54.493176 1908903 main.go:141] libmachine: (bridge-036922) reserved static IP address 192.168.61.165 for domain bridge-036922
	I0414 15:42:54.493184 1908903 main.go:141] libmachine: (bridge-036922) waiting for SSH...
	I0414 15:42:54.496732 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.497256 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.497289 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.497438 1908903 main.go:141] libmachine: (bridge-036922) DBG | Using SSH client type: external
	I0414 15:42:54.497470 1908903 main.go:141] libmachine: (bridge-036922) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa (-rw-------)
	I0414 15:42:54.497515 1908903 main.go:141] libmachine: (bridge-036922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:42:54.497529 1908903 main.go:141] libmachine: (bridge-036922) DBG | About to run SSH command:
	I0414 15:42:54.497542 1908903 main.go:141] libmachine: (bridge-036922) DBG | exit 0
	I0414 15:42:54.628504 1908903 main.go:141] libmachine: (bridge-036922) DBG | SSH cmd err, output: <nil>: 
	I0414 15:42:54.628809 1908903 main.go:141] libmachine: (bridge-036922) KVM machine creation complete
	I0414 15:42:54.629054 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetConfigRaw
	I0414 15:42:54.629681 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:54.630072 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:54.630332 1908903 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 15:42:54.630347 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetState
	I0414 15:42:54.632867 1908903 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 15:42:54.632882 1908903 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 15:42:54.632889 1908903 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 15:42:54.632896 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:54.637477 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.638308 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:54.638311 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.638423 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.638557 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.638771 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.638949 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:54.639184 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:54.639458 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:54.639474 1908903 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 15:42:54.750695 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:54.750726 1908903 main.go:141] libmachine: Detecting the provisioner...
	I0414 15:42:54.750740 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:54.754154 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.754756 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.754859 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.755083 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:54.755305 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.755456 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.755636 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:54.755854 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:54.756066 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:54.756078 1908903 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 15:42:54.871796 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 15:42:54.871901 1908903 main.go:141] libmachine: found compatible host: buildroot
	I0414 15:42:54.871917 1908903 main.go:141] libmachine: Provisioning with buildroot...
	I0414 15:42:54.871935 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:54.872246 1908903 buildroot.go:166] provisioning hostname "bridge-036922"
	I0414 15:42:54.872272 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:54.872483 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:54.875743 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.876125 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.876156 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.876386 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:54.876633 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.876832 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.876998 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:54.877181 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:54.877502 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:54.877523 1908903 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-036922 && echo "bridge-036922" | sudo tee /etc/hostname
	I0414 15:42:55.000057 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-036922
	
	I0414 15:42:55.000093 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.003879 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.004436 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.004467 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.004819 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.005054 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.005254 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.005507 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.005701 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:55.005995 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:55.006031 1908903 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-036922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-036922/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-036922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:42:55.128677 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:55.128716 1908903 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:42:55.128743 1908903 buildroot.go:174] setting up certificates
	I0414 15:42:55.128772 1908903 provision.go:84] configureAuth start
	I0414 15:42:55.128791 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:55.129195 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:55.132674 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.133237 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.133295 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.133459 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.137559 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.138052 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.138085 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.138322 1908903 provision.go:143] copyHostCerts
	I0414 15:42:55.138401 1908903 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:42:55.138427 1908903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:42:55.138499 1908903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:42:55.138639 1908903 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:42:55.138652 1908903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:42:55.138695 1908903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:42:55.138851 1908903 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:42:55.138863 1908903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:42:55.138888 1908903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:42:55.139002 1908903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.bridge-036922 san=[127.0.0.1 192.168.61.165 bridge-036922 localhost minikube]
	I0414 15:42:55.169326 1908903 provision.go:177] copyRemoteCerts
	I0414 15:42:55.169402 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:42:55.169429 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.172809 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.173239 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.173270 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.173706 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.174030 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.174255 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.174485 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.261123 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:42:55.288685 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 15:42:55.316648 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:42:55.346718 1908903 provision.go:87] duration metric: took 217.897994ms to configureAuth
	I0414 15:42:55.346759 1908903 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:42:55.347050 1908903 config.go:182] Loaded profile config "bridge-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:55.347158 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.350409 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.350855 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.350888 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.351139 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.351328 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.351559 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.351722 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.351895 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:55.352172 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:55.352196 1908903 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:42:54.613578 1907421 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:42:54.613601 1907421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 15:42:54.613625 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:54.617705 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.618134 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:54.618154 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.618488 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:54.618717 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.618939 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:54.619103 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:54.627890 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
	I0414 15:42:54.628364 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.628827 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.628849 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.629832 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.630200 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:54.632595 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:54.633055 1907421 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 15:42:54.633074 1907421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 15:42:54.633096 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:54.637402 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.637882 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:54.637912 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.638627 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:54.638825 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.638994 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:54.639153 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:54.824401 1907421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:42:54.824485 1907421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 15:42:54.848222 1907421 node_ready.go:35] waiting up to 15m0s for node "flannel-036922" to be "Ready" ...
	I0414 15:42:55.016349 1907421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 15:42:55.024812 1907421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:42:55.334347 1907421 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0414 15:42:55.469300 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.469338 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.469832 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.469875 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.469885 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.469894 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.469915 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.470211 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.470226 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.470243 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.494538 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.494593 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.494941 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.494960 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.494989 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.843405 1907421 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-036922" context rescaled to 1 replicas
	I0414 15:42:55.852113 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.852145 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.852433 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.852455 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.852467 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.852475 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.852855 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.852876 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.852900 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.855070 1907421 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 15:42:55.609672 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:42:55.609708 1908903 main.go:141] libmachine: Checking connection to Docker...
	I0414 15:42:55.609720 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetURL
	I0414 15:42:55.611018 1908903 main.go:141] libmachine: (bridge-036922) DBG | using libvirt version 6000000
	I0414 15:42:55.613407 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.613780 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.613807 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.614012 1908903 main.go:141] libmachine: Docker is up and running!
	I0414 15:42:55.614034 1908903 main.go:141] libmachine: Reticulating splines...
	I0414 15:42:55.614045 1908903 client.go:171] duration metric: took 22.962392414s to LocalClient.Create
	I0414 15:42:55.614118 1908903 start.go:167] duration metric: took 22.96254203s to libmachine.API.Create "bridge-036922"
	I0414 15:42:55.614140 1908903 start.go:293] postStartSetup for "bridge-036922" (driver="kvm2")
	I0414 15:42:55.614154 1908903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:42:55.614196 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.614557 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:42:55.614591 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.617351 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.617730 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.617783 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.617881 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.618095 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.618279 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.618457 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.706758 1908903 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:42:55.711737 1908903 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:42:55.711775 1908903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:42:55.711864 1908903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:42:55.711967 1908903 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:42:55.712104 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:42:55.724874 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:42:55.754120 1908903 start.go:296] duration metric: took 139.933679ms for postStartSetup
	I0414 15:42:55.754193 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetConfigRaw
	I0414 15:42:55.754932 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:55.757984 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.758267 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.758297 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.758631 1908903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/config.json ...
	I0414 15:42:55.758849 1908903 start.go:128] duration metric: took 23.13086309s to createHost
	I0414 15:42:55.758880 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.761734 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.762225 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.762256 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.762495 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.762688 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.762944 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.763100 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.763340 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:55.763660 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:55.763680 1908903 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:42:55.871836 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744645375.840729325
	
	I0414 15:42:55.871865 1908903 fix.go:216] guest clock: 1744645375.840729325
	I0414 15:42:55.871875 1908903 fix.go:229] Guest: 2025-04-14 15:42:55.840729325 +0000 UTC Remote: 2025-04-14 15:42:55.758864102 +0000 UTC m=+35.409485075 (delta=81.865223ms)
	I0414 15:42:55.871904 1908903 fix.go:200] guest clock delta is within tolerance: 81.865223ms
	I0414 15:42:55.871910 1908903 start.go:83] releasing machines lock for "bridge-036922", held for 23.244108969s
	I0414 15:42:55.871935 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.872246 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:55.875616 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.876069 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.876099 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.876330 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.876969 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.877174 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.877292 1908903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:42:55.877339 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.877479 1908903 ssh_runner.go:195] Run: cat /version.json
	I0414 15:42:55.877515 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.880495 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.880821 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.880916 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.880943 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.881164 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.881301 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.881322 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.881353 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.881480 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.881545 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.881643 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.881712 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.881911 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.882048 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.986199 1908903 ssh_runner.go:195] Run: systemctl --version
	I0414 15:42:55.993392 1908903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:42:56.164978 1908903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:42:56.172178 1908903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:42:56.172282 1908903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:42:56.197933 1908903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:42:56.197965 1908903 start.go:495] detecting cgroup driver to use...
	I0414 15:42:56.198045 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:42:56.220424 1908903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:42:56.238850 1908903 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:42:56.238925 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:42:56.258562 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:42:56.281276 1908903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:42:56.446192 1908903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:42:56.624912 1908903 docker.go:233] disabling docker service ...
	I0414 15:42:56.624983 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:42:56.646632 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:42:56.661759 1908903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:42:56.821178 1908903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:42:56.960834 1908903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:42:56.976444 1908903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:42:57.000020 1908903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 15:42:57.000107 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.012798 1908903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:42:57.012878 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.024940 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.037307 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.049273 1908903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:42:57.061679 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.073870 1908903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.092514 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.104956 1908903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:42:57.115727 1908903 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:42:57.115813 1908903 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:42:57.133078 1908903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 15:42:57.144441 1908903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:57.281237 1908903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 15:42:57.385608 1908903 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:42:57.385708 1908903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:42:57.391600 1908903 start.go:563] Will wait 60s for crictl version
	I0414 15:42:57.391684 1908903 ssh_runner.go:195] Run: which crictl
	I0414 15:42:57.396066 1908903 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:42:57.436559 1908903 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:42:57.436662 1908903 ssh_runner.go:195] Run: crio --version
	I0414 15:42:57.466242 1908903 ssh_runner.go:195] Run: crio --version
	I0414 15:42:57.506266 1908903 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 15:42:55.856560 1907421 addons.go:514] duration metric: took 1.293454428s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 15:42:56.852426 1907421 node_ready.go:53] node "flannel-036922" has status "Ready":"False"
	I0414 15:42:59.215921 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:42:59.216197 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:42:59.216228 1898413 kubeadm.go:310] 
	I0414 15:42:59.216283 1898413 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 15:42:59.216336 1898413 kubeadm.go:310] 		timed out waiting for the condition
	I0414 15:42:59.216342 1898413 kubeadm.go:310] 
	I0414 15:42:59.216389 1898413 kubeadm.go:310] 	This error is likely caused by:
	I0414 15:42:59.216433 1898413 kubeadm.go:310] 		- The kubelet is not running
	I0414 15:42:59.216581 1898413 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 15:42:59.216592 1898413 kubeadm.go:310] 
	I0414 15:42:59.216725 1898413 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 15:42:59.216770 1898413 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 15:42:59.216818 1898413 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 15:42:59.216822 1898413 kubeadm.go:310] 
	I0414 15:42:59.216907 1898413 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 15:42:59.217006 1898413 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 15:42:59.217015 1898413 kubeadm.go:310] 
	I0414 15:42:59.217187 1898413 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 15:42:59.217303 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 15:42:59.217409 1898413 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 15:42:59.217503 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 15:42:59.217511 1898413 kubeadm.go:310] 
	I0414 15:42:59.219259 1898413 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:42:59.219407 1898413 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 15:42:59.219514 1898413 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 15:42:59.220159 1898413 kubeadm.go:394] duration metric: took 8m0.569569368s to StartCluster
	I0414 15:42:59.220230 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:42:59.220304 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:42:59.296348 1898413 cri.go:89] found id: ""
	I0414 15:42:59.296381 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.296393 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:42:59.296403 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:42:59.296511 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:42:59.357668 1898413 cri.go:89] found id: ""
	I0414 15:42:59.357701 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.357713 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:42:59.357720 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:42:59.357797 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:42:59.408582 1898413 cri.go:89] found id: ""
	I0414 15:42:59.408613 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.408621 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:42:59.408627 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:42:59.408702 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:42:59.457402 1898413 cri.go:89] found id: ""
	I0414 15:42:59.457438 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.457449 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:42:59.457457 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:42:59.457530 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:42:59.508543 1898413 cri.go:89] found id: ""
	I0414 15:42:59.508601 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.508613 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:42:59.508621 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:42:59.508691 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:42:59.557213 1898413 cri.go:89] found id: ""
	I0414 15:42:59.557250 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.557262 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:42:59.557270 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:42:59.557343 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:42:59.607994 1898413 cri.go:89] found id: ""
	I0414 15:42:59.608023 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.608048 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:42:59.608057 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:42:59.608129 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:42:59.657459 1898413 cri.go:89] found id: ""
	I0414 15:42:59.657494 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.657507 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:42:59.657525 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:42:59.657549 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:42:59.723160 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:42:59.723223 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:42:59.743367 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:42:59.743418 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:42:59.876644 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:42:59.876695 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:42:59.876713 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:43:00.032948 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:43:00.032994 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 15:43:00.086613 1898413 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 15:43:00.086686 1898413 out.go:270] * 
	W0414 15:43:00.086782 1898413 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:43:00.086809 1898413 out.go:270] * 
	W0414 15:43:00.087917 1898413 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 15:43:00.091413 1898413 out.go:201] 
	W0414 15:43:00.092767 1898413 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:43:00.092825 1898413 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 15:43:00.092861 1898413 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 15:43:00.094446 1898413 out.go:201] 
	I0414 15:42:57.507650 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:57.510669 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:57.511148 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:57.511176 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:57.511409 1908903 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 15:42:57.516092 1908903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:42:57.529590 1908903 kubeadm.go:883] updating cluster {Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.165 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:42:57.529766 1908903 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:42:57.529845 1908903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:42:57.572139 1908903 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 15:42:57.572227 1908903 ssh_runner.go:195] Run: which lz4
	I0414 15:42:57.576627 1908903 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:42:57.581291 1908903 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:42:57.581343 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 15:42:59.289654 1908903 crio.go:462] duration metric: took 1.713065895s to copy over tarball
	I0414 15:42:59.289872 1908903 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> CRI-O <==
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.450788499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744645381450745546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca173863-6998-448c-9cf2-d2fd631ab0a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.452660149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=263941a8-29fb-40a9-a682-a156c3e6d712 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.452772447Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=263941a8-29fb-40a9-a682-a156c3e6d712 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.452840781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=263941a8-29fb-40a9-a682-a156c3e6d712 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.501750396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d973640-a25d-4102-94b0-96110f102737 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.501844070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d973640-a25d-4102-94b0-96110f102737 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.503660974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d1c0401-9821-4d9a-83bd-3bc697eaaffc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.504288694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744645381504252305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d1c0401-9821-4d9a-83bd-3bc697eaaffc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.505275957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdcc72d5-8456-42c7-aaa9-ead46bfea6f7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.505354165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdcc72d5-8456-42c7-aaa9-ead46bfea6f7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.505429046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bdcc72d5-8456-42c7-aaa9-ead46bfea6f7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.548233801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7adbdd7f-ad6b-4e76-81e6-1c8969bda270 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.548338197Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7adbdd7f-ad6b-4e76-81e6-1c8969bda270 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.549798473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32c265c4-04d4-4639-8330-89836661738d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.550274871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744645381550251038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32c265c4-04d4-4639-8330-89836661738d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.551065987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa015138-b144-4a2c-9181-bfdac3eb94bd name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.551132418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa015138-b144-4a2c-9181-bfdac3eb94bd name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.551165976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fa015138-b144-4a2c-9181-bfdac3eb94bd name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.591880331Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=040f908d-9150-4501-b7dc-c347eac49e3a name=/runtime.v1.RuntimeService/Version
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.591967291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=040f908d-9150-4501-b7dc-c347eac49e3a name=/runtime.v1.RuntimeService/Version
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.593343329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e7dc6e2-3eb9-437d-a95d-89be090884bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.593738161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744645381593709796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e7dc6e2-3eb9-437d-a95d-89be090884bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.594441195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2462df5a-2ff0-4b98-b7db-fea41dfce041 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.594512603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2462df5a-2ff0-4b98-b7db-fea41dfce041 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:43:01 old-k8s-version-529869 crio[627]: time="2025-04-14 15:43:01.594549029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2462df5a-2ff0-4b98-b7db-fea41dfce041 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 15:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057458] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.052593] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.402276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.030956] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.742898] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.855862] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.065964] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065397] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.221422] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.162433] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.282594] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.876323] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.067338] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.988800] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[Apr14 15:35] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 15:39] systemd-fstab-generator[4999]: Ignoring "noauto" option for root device
	[Apr14 15:41] systemd-fstab-generator[5283]: Ignoring "noauto" option for root device
	[  +0.099182] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:43:01 up 8 min,  0 users,  load average: 0.02, 0.13, 0.08
	Linux old-k8s-version-529869 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager.(*ListPager).List(0xc0009c9e60, 0x4f7fe00, 0xc000052018, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager/pager.go:91 +0x179
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc0001acc00, 0xc00023f180, 0xc0009c45a0, 0xc00099b060, 0xc0009c23bc, 0xc00099b070, 0xc0009be840)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:302 +0x1a5
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: goroutine 154 [select]:
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: net.(*Resolver).lookupIPAddr(0x70c5740, 0x4f7fe40, 0xc0001ad080, 0x48ab5d6, 0x3, 0xc00099c9c0, 0x1f, 0x20fb, 0x0, 0x0, ...)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc0001ad080, 0x48ab5d6, 0x3, 0xc00099c9c0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0001ad080, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc00099c9c0, 0x24, 0x0, ...)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: net.(*Dialer).DialContext(0xc0001715c0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00099c9c0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000751960, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00099c9c0, 0x24, 0x60, 0x7f76dffb82a8, 0x118, ...)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: net/http.(*Transport).dial(0xc000a6f040, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc00099c9c0, 0x24, 0x0, 0x0, 0x4fec640, ...)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: net/http.(*Transport).dialConn(0xc000a6f040, 0x4f7fe00, 0xc000052030, 0x0, 0xc0009be900, 0x5, 0xc00099c9c0, 0x24, 0x0, 0xc000998c60, ...)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: net/http.(*Transport).dialConnFor(0xc000a6f040, 0xc00091def0)
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]: created by net/http.(*Transport).queueForDial
	Apr 14 15:43:01 old-k8s-version-529869 kubelet[5463]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 2 (336.881988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-529869" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (512.49s)
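
The failure above follows the pattern kubeadm itself describes: the kubelet never answered its /healthz probe, so the control-plane static pods were never started and the apiserver stayed Stopped. A minimal triage sketch, assuming the profile name old-k8s-version-529869 from this run and the CRI-O socket path shown in the log; the commands are the ones the kubeadm output and the minikube "Suggestion" line themselves recommend, and the cgroup-driver flag is minikube's suggested workaround, not a verified fix:

	# inside the VM (e.g. via: out/minikube-linux-amd64 ssh -p old-k8s-version-529869)
	systemctl status kubelet                 # is the service active at all?
	journalctl -xeu kubelet                  # why it exited (cgroup driver, config, certs)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# from the host, retry the start with the flag suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-529869 --extra-config=kubelet.cgroup-driver=systemd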

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-708005 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 15:38:18.359193 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p newest-cni-708005 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: exit status 80 (25.168718623s)

                                                
                                                
-- stdout --
	* [newest-cni-708005] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "newest-cni-708005" primary control-plane node in "newest-cni-708005" cluster
	* Restarting existing kvm2 VM for "newest-cni-708005" ...
	* Updating the running kvm2 "newest-cni-708005" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 15:37:54.447682 1899966 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:37:54.447897 1899966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:37:54.447914 1899966 out.go:358] Setting ErrFile to fd 2...
	I0414 15:37:54.447919 1899966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:37:54.448104 1899966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:37:54.448680 1899966 out.go:352] Setting JSON to false
	I0414 15:37:54.449725 1899966 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":40818,"bootTime":1744604256,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:37:54.449851 1899966 start.go:139] virtualization: kvm guest
	I0414 15:37:54.451961 1899966 out.go:177] * [newest-cni-708005] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:37:54.453267 1899966 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:37:54.453290 1899966 notify.go:220] Checking for updates...
	I0414 15:37:54.455764 1899966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:37:54.457175 1899966 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:37:54.458400 1899966 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:37:54.459668 1899966 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:37:54.461024 1899966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:37:54.462837 1899966 config.go:182] Loaded profile config "newest-cni-708005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:37:54.463270 1899966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:37:54.463357 1899966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:37:54.479372 1899966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0414 15:37:54.479953 1899966 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:37:54.480545 1899966 main.go:141] libmachine: Using API Version  1
	I0414 15:37:54.480570 1899966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:37:54.480995 1899966 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:37:54.481182 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .DriverName
	I0414 15:37:54.481390 1899966 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:37:54.481802 1899966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:37:54.481863 1899966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:37:54.498026 1899966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0414 15:37:54.498481 1899966 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:37:54.498949 1899966 main.go:141] libmachine: Using API Version  1
	I0414 15:37:54.498969 1899966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:37:54.499308 1899966 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:37:54.499490 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .DriverName
	I0414 15:37:54.539361 1899966 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 15:37:54.540645 1899966 start.go:297] selected driver: kvm2
	I0414 15:37:54.540661 1899966 start.go:901] validating driver "kvm2" against &{Name:newest-cni-708005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:newest-cni-708005 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPort
s:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:37:54.540792 1899966 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:37:54.541593 1899966 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:37:54.541694 1899966 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 15:37:54.558977 1899966 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 15:37:54.559490 1899966 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0414 15:37:54.559537 1899966 cni.go:84] Creating CNI manager for ""
	I0414 15:37:54.559599 1899966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 15:37:54.559657 1899966 start.go:340] cluster config:
	{Name:newest-cni-708005 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-708005 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.41 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:37:54.559810 1899966 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:37:54.561635 1899966 out.go:177] * Starting "newest-cni-708005" primary control-plane node in "newest-cni-708005" cluster
	I0414 15:37:54.563023 1899966 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:37:54.563080 1899966 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 15:37:54.563092 1899966 cache.go:56] Caching tarball of preloaded images
	I0414 15:37:54.563183 1899966 preload.go:172] Found /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 15:37:54.563196 1899966 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 15:37:54.563353 1899966 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/newest-cni-708005/config.json ...
	I0414 15:37:54.563593 1899966 start.go:360] acquireMachinesLock for newest-cni-708005: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:37:54.563668 1899966 start.go:364] duration metric: took 51.691µs to acquireMachinesLock for "newest-cni-708005"
	I0414 15:37:54.563692 1899966 start.go:96] Skipping create...Using existing machine configuration
	I0414 15:37:54.563699 1899966 fix.go:54] fixHost starting: 
	I0414 15:37:54.564024 1899966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:37:54.564063 1899966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:37:54.580200 1899966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0414 15:37:54.580655 1899966 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:37:54.581120 1899966 main.go:141] libmachine: Using API Version  1
	I0414 15:37:54.581141 1899966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:37:54.581536 1899966 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:37:54.581747 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .DriverName
	I0414 15:37:54.581912 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetState
	I0414 15:37:54.583840 1899966 fix.go:112] recreateIfNeeded on newest-cni-708005: state=Stopped err=<nil>
	I0414 15:37:54.583869 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .DriverName
	W0414 15:37:54.584036 1899966 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 15:37:54.586432 1899966 out.go:177] * Restarting existing kvm2 VM for "newest-cni-708005" ...
	I0414 15:37:54.587571 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .Start
	I0414 15:37:54.587784 1899966 main.go:141] libmachine: (newest-cni-708005) starting domain...
	I0414 15:37:54.587807 1899966 main.go:141] libmachine: (newest-cni-708005) ensuring networks are active...
	I0414 15:37:54.588746 1899966 main.go:141] libmachine: (newest-cni-708005) Ensuring network default is active
	I0414 15:37:54.589128 1899966 main.go:141] libmachine: (newest-cni-708005) Ensuring network mk-newest-cni-708005 is active
	I0414 15:37:54.589575 1899966 main.go:141] libmachine: (newest-cni-708005) getting domain XML...
	I0414 15:37:54.590546 1899966 main.go:141] libmachine: (newest-cni-708005) creating domain...
	I0414 15:37:54.948664 1899966 main.go:141] libmachine: (newest-cni-708005) waiting for IP...
	I0414 15:37:54.949787 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:37:54.950257 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:37:54.950342 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:37:54.950246 1900002 retry.go:31] will retry after 196.193729ms: waiting for domain to come up
	I0414 15:37:55.148473 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:37:55.149105 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:37:55.149142 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:37:55.149090 1900002 retry.go:31] will retry after 342.960725ms: waiting for domain to come up
	I0414 15:37:55.494008 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:37:55.494657 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:37:55.494701 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:37:55.494644 1900002 retry.go:31] will retry after 305.327035ms: waiting for domain to come up
	I0414 15:37:55.801997 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:37:55.802517 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:37:55.802547 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:37:55.802475 1900002 retry.go:31] will retry after 476.548089ms: waiting for domain to come up
	I0414 15:37:56.281272 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:37:56.281714 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:37:56.281788 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:37:56.281699 1900002 retry.go:31] will retry after 721.660125ms: waiting for domain to come up
	I0414 15:37:57.004774 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:37:57.005361 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:37:57.005399 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:37:57.005320 1900002 retry.go:31] will retry after 786.998017ms: waiting for domain to come up
	I0414 15:37:57.794270 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:37:57.794846 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:37:57.794875 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:37:57.794789 1900002 retry.go:31] will retry after 1.121498826s: waiting for domain to come up
	I0414 15:37:58.918336 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:37:58.919016 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:37:58.919075 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:37:58.919002 1900002 retry.go:31] will retry after 1.401718085s: waiting for domain to come up
	I0414 15:38:00.323125 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:00.323719 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:38:00.323766 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:38:00.323700 1900002 retry.go:31] will retry after 1.146913176s: waiting for domain to come up
	I0414 15:38:01.472019 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:01.472606 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:38:01.472639 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:38:01.472539 1900002 retry.go:31] will retry after 1.957870565s: waiting for domain to come up
	I0414 15:38:03.432814 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:03.433374 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:38:03.433407 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:38:03.433337 1900002 retry.go:31] will retry after 2.649435912s: waiting for domain to come up
	I0414 15:38:06.084236 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:06.084900 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:38:06.084957 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:38:06.084870 1900002 retry.go:31] will retry after 2.299087842s: waiting for domain to come up
	I0414 15:38:08.385240 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:08.385784 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | unable to find current IP address of domain newest-cni-708005 in network mk-newest-cni-708005
	I0414 15:38:08.385814 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | I0414 15:38:08.385718 1900002 retry.go:31] will retry after 3.894547517s: waiting for domain to come up
	I0414 15:38:12.283672 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.284305 1899966 main.go:141] libmachine: (newest-cni-708005) found domain IP: 192.168.61.41
	I0414 15:38:12.284333 1899966 main.go:141] libmachine: (newest-cni-708005) reserving static IP address...
	I0414 15:38:12.284368 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has current primary IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.284946 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "newest-cni-708005", mac: "52:54:00:33:9f:85", ip: "192.168.61.41"} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:12.284976 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | skip adding static IP to network mk-newest-cni-708005 - found existing host DHCP lease matching {name: "newest-cni-708005", mac: "52:54:00:33:9f:85", ip: "192.168.61.41"}
	I0414 15:38:12.285003 1899966 main.go:141] libmachine: (newest-cni-708005) reserved static IP address 192.168.61.41 for domain newest-cni-708005
	I0414 15:38:12.285023 1899966 main.go:141] libmachine: (newest-cni-708005) waiting for SSH...
	I0414 15:38:12.285034 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | Getting to WaitForSSH function...
	I0414 15:38:12.287623 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.287993 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:12.288034 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.288185 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | Using SSH client type: external
	I0414 15:38:12.288213 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/newest-cni-708005/id_rsa (-rw-------)
	I0414 15:38:12.288258 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/newest-cni-708005/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:38:12.288285 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | About to run SSH command:
	I0414 15:38:12.288298 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | exit 0
	I0414 15:38:12.419053 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | SSH cmd err, output: <nil>: 
	I0414 15:38:12.419411 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetConfigRaw
	I0414 15:38:12.420091 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetIP
	I0414 15:38:12.422697 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.423075 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:12.423104 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.423373 1899966 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/newest-cni-708005/config.json ...
	I0414 15:38:12.423647 1899966 machine.go:93] provisionDockerMachine start ...
	I0414 15:38:12.423669 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .DriverName
	I0414 15:38:12.423915 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:12.426521 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.426857 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:12.426887 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.427071 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:12.427264 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:12.427438 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:12.427675 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:12.427898 1899966 main.go:141] libmachine: Using SSH client type: native
	I0414 15:38:12.428197 1899966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0414 15:38:12.428212 1899966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 15:38:12.543668 1899966 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0414 15:38:12.543703 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetMachineName
	I0414 15:38:12.543969 1899966 buildroot.go:166] provisioning hostname "newest-cni-708005"
	I0414 15:38:12.543999 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetMachineName
	I0414 15:38:12.544244 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:12.547351 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.547771 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:12.547811 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.547896 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:12.548091 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:12.548243 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:12.548432 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:12.548626 1899966 main.go:141] libmachine: Using SSH client type: native
	I0414 15:38:12.548914 1899966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0414 15:38:12.548934 1899966 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-708005 && echo "newest-cni-708005" | sudo tee /etc/hostname
	I0414 15:38:12.678481 1899966 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708005
	
	I0414 15:38:12.678548 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:12.681537 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.681931 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:12.681958 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.682155 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:12.682391 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:12.682612 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:12.682798 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:12.683007 1899966 main.go:141] libmachine: Using SSH client type: native
	I0414 15:38:12.683207 1899966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0414 15:38:12.683222 1899966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-708005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-708005/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-708005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:38:12.808629 1899966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:38:12.808665 1899966 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:38:12.808686 1899966 buildroot.go:174] setting up certificates
	I0414 15:38:12.808697 1899966 provision.go:84] configureAuth start
	I0414 15:38:12.808706 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetMachineName
	I0414 15:38:12.809066 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetIP
	I0414 15:38:12.811976 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.812273 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:12.812302 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.812505 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:12.814827 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.815147 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:12.815192 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:12.815303 1899966 provision.go:143] copyHostCerts
	I0414 15:38:12.815363 1899966 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:38:12.815373 1899966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:38:12.815445 1899966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:38:12.815569 1899966 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:38:12.815580 1899966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:38:12.815623 1899966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:38:12.815704 1899966 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:38:12.815714 1899966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:38:12.815749 1899966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:38:12.815831 1899966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.newest-cni-708005 san=[127.0.0.1 192.168.61.41 localhost minikube newest-cni-708005]
	I0414 15:38:13.047119 1899966 provision.go:177] copyRemoteCerts
	I0414 15:38:13.047206 1899966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:38:13.047238 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:13.050240 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:13.050666 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:13.050693 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:13.050897 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:13.051125 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:13.051301 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:13.051488 1899966 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/newest-cni-708005/id_rsa Username:docker}
	I0414 15:38:13.141662 1899966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:38:13.168224 1899966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0414 15:38:13.194579 1899966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:38:13.220509 1899966 provision.go:87] duration metric: took 411.794675ms to configureAuth
	I0414 15:38:13.220549 1899966 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:38:13.220750 1899966 config.go:182] Loaded profile config "newest-cni-708005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:38:13.220824 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:13.223857 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:13.224249 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:13.224290 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:13.224465 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:13.224716 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:13.225003 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:13.225207 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:13.225440 1899966 main.go:141] libmachine: Using SSH client type: native
	I0414 15:38:13.225704 1899966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0414 15:38:13.225733 1899966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:38:13.413908 1899966 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	I0414 15:38:13.413964 1899966 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	I0414 15:38:13.413974 1899966 machine.go:96] duration metric: took 990.313198ms to provisionDockerMachine
	I0414 15:38:13.414005 1899966 fix.go:56] duration metric: took 18.850307722s for fixHost
	I0414 15:38:13.414012 1899966 start.go:83] releasing machines lock for "newest-cni-708005", held for 18.850330898s
	W0414 15:38:13.414027 1899966 start.go:714] error starting host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	W0414 15:38:13.414292 1899966 out.go:270] ! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	I0414 15:38:13.414307 1899966 start.go:729] Will try again in 5 seconds ...
	I0414 15:38:18.415153 1899966 start.go:360] acquireMachinesLock for newest-cni-708005: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:38:18.415321 1899966 start.go:364] duration metric: took 69.747µs to acquireMachinesLock for "newest-cni-708005"
	I0414 15:38:18.415354 1899966 start.go:96] Skipping create...Using existing machine configuration
	I0414 15:38:18.415363 1899966 fix.go:54] fixHost starting: 
	I0414 15:38:18.415687 1899966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:38:18.415738 1899966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:38:18.434236 1899966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I0414 15:38:18.434842 1899966 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:38:18.435569 1899966 main.go:141] libmachine: Using API Version  1
	I0414 15:38:18.435601 1899966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:38:18.435985 1899966 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:38:18.436230 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .DriverName
	I0414 15:38:18.436383 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetState
	I0414 15:38:18.438205 1899966 fix.go:112] recreateIfNeeded on newest-cni-708005: state=Running err=<nil>
	W0414 15:38:18.438223 1899966 fix.go:138] unexpected machine state, will restart: <nil>
	I0414 15:38:18.440678 1899966 out.go:177] * Updating the running kvm2 "newest-cni-708005" VM ...
	I0414 15:38:18.442302 1899966 machine.go:93] provisionDockerMachine start ...
	I0414 15:38:18.442360 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .DriverName
	I0414 15:38:18.442717 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:18.446040 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.446611 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:18.446638 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.446853 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:18.447088 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:18.447265 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:18.447404 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:18.447620 1899966 main.go:141] libmachine: Using SSH client type: native
	I0414 15:38:18.447949 1899966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0414 15:38:18.447964 1899966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 15:38:18.571301 1899966 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708005
	
	I0414 15:38:18.571344 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetMachineName
	I0414 15:38:18.571629 1899966 buildroot.go:166] provisioning hostname "newest-cni-708005"
	I0414 15:38:18.571654 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetMachineName
	I0414 15:38:18.571853 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:18.575034 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.575451 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:18.575476 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.575713 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:18.575923 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:18.576118 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:18.576275 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:18.576471 1899966 main.go:141] libmachine: Using SSH client type: native
	I0414 15:38:18.576702 1899966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0414 15:38:18.576715 1899966 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-708005 && echo "newest-cni-708005" | sudo tee /etc/hostname
	I0414 15:38:18.705717 1899966 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708005
	
	I0414 15:38:18.705762 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:18.709276 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.709688 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:18.709716 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.709961 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:18.710163 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:18.710349 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:18.710516 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:18.710663 1899966 main.go:141] libmachine: Using SSH client type: native
	I0414 15:38:18.710917 1899966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0414 15:38:18.710935 1899966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-708005' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-708005/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-708005' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:38:18.831881 1899966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:38:18.831924 1899966 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:38:18.831952 1899966 buildroot.go:174] setting up certificates
	I0414 15:38:18.831964 1899966 provision.go:84] configureAuth start
	I0414 15:38:18.831976 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetMachineName
	I0414 15:38:18.832322 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetIP
	I0414 15:38:18.835292 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.835639 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:18.835670 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.835859 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:18.838380 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.838730 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:18.838758 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:18.838893 1899966 provision.go:143] copyHostCerts
	I0414 15:38:18.838975 1899966 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:38:18.838991 1899966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:38:18.839086 1899966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:38:18.839254 1899966 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:38:18.839269 1899966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:38:18.839313 1899966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:38:18.839429 1899966 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:38:18.839441 1899966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:38:18.839477 1899966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:38:18.839587 1899966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.newest-cni-708005 san=[127.0.0.1 192.168.61.41 localhost minikube newest-cni-708005]
	I0414 15:38:19.187145 1899966 provision.go:177] copyRemoteCerts
	I0414 15:38:19.187211 1899966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:38:19.187249 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:19.190139 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:19.190561 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:19.190594 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:19.190773 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:19.191015 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:19.191161 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:19.191304 1899966 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/newest-cni-708005/id_rsa Username:docker}
	I0414 15:38:19.278286 1899966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:38:19.306513 1899966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0414 15:38:19.333887 1899966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:38:19.363634 1899966 provision.go:87] duration metric: took 531.653093ms to configureAuth
	I0414 15:38:19.363671 1899966 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:38:19.363856 1899966 config.go:182] Loaded profile config "newest-cni-708005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:38:19.363932 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:19.366794 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:19.367177 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:19.367214 1899966 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:19.367422 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:19.367654 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:19.367823 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:19.367972 1899966 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:19.368172 1899966 main.go:141] libmachine: Using SSH client type: native
	I0414 15:38:19.368399 1899966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.41 22 <nil> <nil>}
	I0414 15:38:19.368445 1899966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:38:19.559751 1899966 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	I0414 15:38:19.559786 1899966 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	I0414 15:38:19.559807 1899966 machine.go:96] duration metric: took 1.117480903s to provisionDockerMachine
	I0414 15:38:19.559842 1899966 fix.go:56] duration metric: took 1.144481104s for fixHost
	I0414 15:38:19.559852 1899966 start.go:83] releasing machines lock for "newest-cni-708005", held for 1.1445162s
	W0414 15:38:19.559941 1899966 out.go:270] * Failed to start kvm2 VM. Running "minikube delete -p newest-cni-708005" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p newest-cni-708005" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	I0414 15:38:19.561748 1899966 out.go:201] 
	W0414 15:38:19.563140 1899966 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	Job for crio.service failed because the control process exited with error code.
	See "systemctl status crio.service" and "journalctl -xeu crio.service" for details.
	
	W0414 15:38:19.563164 1899966 out.go:270] * 
	* 
	W0414 15:38:19.564382 1899966 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 15:38:19.565224 1899966 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p newest-cni-708005 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005: exit status 6 (238.507697ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 15:38:19.803051 1900133 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-708005" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-708005" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (25.42s)
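Both provisioning attempts above fail at the same step: minikube writes /etc/sysconfig/crio.minikube and then runs `sudo systemctl restart crio`, and the crio.service restart exits non-zero. The error text itself names the next diagnostic step; the following is a minimal sketch (not part of the test run) of how those commands could be collected from the still-running VM, assuming SSH to the profile still works, as it did during provisioning:

	out/minikube-linux-amd64 -p newest-cni-708005 ssh "sudo systemctl status crio.service --no-pager"
	out/minikube-linux-amd64 -p newest-cni-708005 ssh "sudo journalctl -xeu crio.service | tail -n 100"
	out/minikube-linux-amd64 -p newest-cni-708005 ssh "cat /etc/sysconfig/crio.minikube"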

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-708005 image list --format=json
start_stop_delete_test.go:302: v1.32.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.16-0",
- 	"registry.k8s.io/kube-apiserver:v1.32.2",
- 	"registry.k8s.io/kube-controller-manager:v1.32.2",
- 	"registry.k8s.io/kube-proxy:v1.32.2",
- 	"registry.k8s.io/kube-scheduler:v1.32.2",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005: exit status 6 (239.991292ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 15:38:20.271955 1900187 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-708005" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-708005" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.47s)
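This image check fails as a direct consequence of the provisioning failure above: the want side of the diff is the image set expected for Kubernetes v1.32.2, while the got side is empty because kubeadm never ran and nothing was pulled into the crio image store. A hedged sketch of how to confirm the store really is empty (assuming crictl is present in the guest, as it normally is with the crio runtime):

	out/minikube-linux-amd64 -p newest-cni-708005 image list
	out/minikube-linux-amd64 -p newest-cni-708005 ssh "sudo crictl images"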

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-708005 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-708005 --alsologtostderr -v=1: exit status 80 (1.677085873s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-708005 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 15:38:20.336274 1900217 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:38:20.336435 1900217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:38:20.336475 1900217 out.go:358] Setting ErrFile to fd 2...
	I0414 15:38:20.336482 1900217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:38:20.336890 1900217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:38:20.337149 1900217 out.go:352] Setting JSON to false
	I0414 15:38:20.337193 1900217 mustload.go:65] Loading cluster: newest-cni-708005
	I0414 15:38:20.337545 1900217 config.go:182] Loaded profile config "newest-cni-708005": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:38:20.337877 1900217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:38:20.337929 1900217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:38:20.354172 1900217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0414 15:38:20.354692 1900217 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:38:20.355303 1900217 main.go:141] libmachine: Using API Version  1
	I0414 15:38:20.355332 1900217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:38:20.355698 1900217 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:38:20.355905 1900217 main.go:141] libmachine: (newest-cni-708005) Calling .GetState
	I0414 15:38:20.357672 1900217 host.go:66] Checking if "newest-cni-708005" exists ...
	I0414 15:38:20.357966 1900217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:38:20.358010 1900217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:38:20.373636 1900217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46531
	I0414 15:38:20.374118 1900217 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:38:20.374661 1900217 main.go:141] libmachine: Using API Version  1
	I0414 15:38:20.374690 1900217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:38:20.375056 1900217 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:38:20.375241 1900217 main.go:141] libmachine: (newest-cni-708005) Calling .DriverName
	I0414 15:38:20.375940 1900217 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.35.0/minikube-v1.35.0-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.35.0-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/home/jenkins:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-708005 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0414 15:38:20.378088 1900217 out.go:177] * Pausing node newest-cni-708005 ... 
	I0414 15:38:20.379239 1900217 host.go:66] Checking if "newest-cni-708005" exists ...
	I0414 15:38:20.379676 1900217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:38:20.379723 1900217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:38:20.396123 1900217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36353
	I0414 15:38:20.396571 1900217 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:38:20.397013 1900217 main.go:141] libmachine: Using API Version  1
	I0414 15:38:20.397039 1900217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:38:20.397372 1900217 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:38:20.397563 1900217 main.go:141] libmachine: (newest-cni-708005) Calling .DriverName
	I0414 15:38:20.397756 1900217 ssh_runner.go:195] Run: systemctl --version
	I0414 15:38:20.397779 1900217 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHHostname
	I0414 15:38:20.400733 1900217 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:20.401146 1900217 main.go:141] libmachine: (newest-cni-708005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:9f:85", ip: ""} in network mk-newest-cni-708005: {Iface:virbr3 ExpiryTime:2025-04-14 16:38:05 +0000 UTC Type:0 Mac:52:54:00:33:9f:85 Iaid: IPaddr:192.168.61.41 Prefix:24 Hostname:newest-cni-708005 Clientid:01:52:54:00:33:9f:85}
	I0414 15:38:20.401180 1900217 main.go:141] libmachine: (newest-cni-708005) DBG | domain newest-cni-708005 has defined IP address 192.168.61.41 and MAC address 52:54:00:33:9f:85 in network mk-newest-cni-708005
	I0414 15:38:20.401302 1900217 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHPort
	I0414 15:38:20.401474 1900217 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHKeyPath
	I0414 15:38:20.401626 1900217 main.go:141] libmachine: (newest-cni-708005) Calling .GetSSHUsername
	I0414 15:38:20.401784 1900217 sshutil.go:53] new ssh client: &{IP:192.168.61.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/newest-cni-708005/id_rsa Username:docker}
	I0414 15:38:20.485505 1900217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:38:20.501232 1900217 pause.go:51] kubelet running: false
	I0414 15:38:20.501342 1900217 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0414 15:38:20.517244 1900217 retry.go:31] will retry after 259.83483ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0414 15:38:20.777822 1900217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:38:20.794665 1900217 pause.go:51] kubelet running: false
	I0414 15:38:20.794748 1900217 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0414 15:38:20.809244 1900217 retry.go:31] will retry after 430.405526ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0414 15:38:21.240726 1900217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:38:21.255741 1900217 pause.go:51] kubelet running: false
	I0414 15:38:21.255824 1900217 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0414 15:38:21.270494 1900217 retry.go:31] will retry after 648.316164ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0414 15:38:21.919342 1900217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:38:21.935028 1900217 pause.go:51] kubelet running: false
	I0414 15:38:21.935095 1900217 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0414 15:38:21.952536 1900217 out.go:201] 
	W0414 15:38:21.954168 1900217 out.go:270] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W0414 15:38:21.954191 1900217 out.go:270] * 
	* 
	W0414 15:38:21.960088 1900217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_1.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 15:38:21.961989 1900217 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-708005 --alsologtostderr -v=1 failed: exit status 80
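The pause failure above comes down to the node having no kubelet.service unit at all: the is-active probe reports kubelet as not running, and every `sudo systemctl disable --now kubelet` retry exits with status 1 because the unit file does not exist, so minikube aborts with GUEST_PAUSE. A minimal Go sketch of the same two probes, run by hand over `minikube ssh` when debugging a pause failure like this one (assumptions: the minikube binary is on PATH, the profile name matches this run, and the runOnNode helper is purely illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode runs a command inside the minikube VM for the given profile via `minikube ssh`.
func runOnNode(profile string, args ...string) (string, error) {
	cmd := exec.Command("minikube", append([]string{"-p", profile, "ssh", "--"}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "newest-cni-708005" // profile from the failing run; adjust as needed

	// Mirrors the log's `sudo systemctl is-active --quiet ... kubelet` probe.
	if _, err := runOnNode(profile, "sudo", "systemctl", "is-active", "--quiet", "kubelet"); err != nil {
		fmt.Println("kubelet is not active on the node:", err)
	} else {
		fmt.Println("kubelet is active on the node")
	}

	// Check whether the unit file is installed at all; its absence explains why
	// `systemctl disable --now kubelet` keeps exiting with status 1.
	out, err := runOnNode(profile, "systemctl", "cat", "kubelet")
	if err != nil {
		fmt.Println("kubelet.service unit file not found:", err)
		return
	}
	fmt.Println("kubelet.service unit file:\n" + out)
}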
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005: exit status 6 (233.66899ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 15:38:22.184043 1900264 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-708005" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-708005" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005: exit status 6 (233.185803ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 15:38:22.416611 1900294 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-708005" does not appear in /home/jenkins/minikube-integration/20512-1845971/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "newest-cni-708005" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (2.14s)
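Both post-mortem status checks above report the host as Running while the kubeconfig has no endpoint for the profile, which is why kubectl is flagged as stale and log retrieval is skipped. A minimal sketch, assuming kubectl and minikube are on PATH and using the profile name from this run, of confirming whether the context exists and regenerating it with `minikube update-context` as the warning suggests:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "newest-cni-708005" // profile reported missing from the kubeconfig above

	// List the context names kubectl currently knows about.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("listing kubectl contexts failed:", err)
		return
	}

	for _, ctx := range strings.Fields(string(out)) {
		if ctx == profile {
			fmt.Println("context present; the stale-context warning no longer applies")
			return
		}
	}

	// The context is missing, matching the status error; ask minikube to regenerate it.
	if combined, err := exec.Command("minikube", "-p", profile, "update-context").CombinedOutput(); err != nil {
		fmt.Printf("minikube update-context failed: %v\n%s", err, combined)
		return
	}
	fmt.Println("kubeconfig context refreshed for", profile)
}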

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:43:07.360865 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:43:18.359522 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:43:48.323168 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:44:31.482533 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:44:36.965211 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:44:41.434318 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:44:48.843963 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:44:48.850400 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:44:48.861796 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:44:48.883308 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:44:48.924848 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:44:49.006356 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:44:49.168203 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:44:49.489715 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:44:50.131842 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:44:51.413300 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:44:53.974708 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:44:59.097048 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:45:04.667469 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:45:09.338431 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:45:10.244957 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:45:29.820528 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:45:41.593844 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:45:41.600358 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:45:41.611757 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:45:41.633251 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:45:41.674767 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:45:41.756283 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:45:41.917903 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:45:42.239724 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:45:42.881997 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:45:44.163498 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:45:46.724946 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:45:51.846978 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:02.088671 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:10.781944 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:22.570606 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:33.993886 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:34.000288 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:34.011706 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:34.033094 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:34.074524 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:34.156006 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:34.317342 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:34.639062 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:35.281194 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:36.562634 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:39.124533 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:44.246000 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:50.205913 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:50.212320 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:50.223700 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:50.245119 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:50.286577 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:50.368891 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:50.530477 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:46:50.852227 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:51.494341 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:52.775876 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:54.487728 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:46:55.337810 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:00.459834 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:03.532151 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:10.701351 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:14.969878 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:26.382993 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:31.183431 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:32.704251 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:34.572275 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:47.031518 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:47:47.038006 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:47:47.049484 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:47:47.070876 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:47:47.112367 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:47:47.193927 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:47.355221 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:47:47.677008 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:48.319071 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:49.601382 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:52.162960 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:54.086430 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:55.931235 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:47:57.284687 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:07.526212 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:12.145227 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:18.359279 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:18.642112 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:18.648483 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:18.659942 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:18.681421 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:18.722899 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:18.804474 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:18.966341 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:19.288107 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:19.930014 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:21.211890 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:23.773388 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:25.453452 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:28.007721 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:28.895671 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:39.137759 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:57.791709 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:57.798095 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:57.809485 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:57.830852 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:57.872306 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:57.953832 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:58.115437 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:58.437137 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:48:59.078610 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:48:59.619206 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:00.360558 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:02.922027 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:08.043900 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:08.969873 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:17.853619 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:49:18.285473 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:31.482144 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:34.067151 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:36.965200 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:38.766940 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:40.581135 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:49:48.844591 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:50:16.545670 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:50:19.728226 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:50:30.891978 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:50:41.593863 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:51:02.502736 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:51:09.295079 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:51:33.993855 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:51:41.650057 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:51:50.205940 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:52:01.695361 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 2 (250.586719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-529869" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
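For manual follow-up outside the test harness, the two checks above can be approximated directly; a minimal sketch, assuming minikube also registered a kubeconfig context named old-k8s-version-529869 for this profile:

	# List the dashboard pod the test was waiting for (hypothetical manual re-check)
	kubectl --context old-k8s-version-529869 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# Query the same apiserver component state the test reads (reported "Stopped" above)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869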
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 2 (251.39259ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-529869 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-036922 sudo iptables                       | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo docker                         | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo find                           | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo crio                           | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-036922                                     | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 15:42:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 15:42:20.393428 1908903 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:42:20.393707 1908903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:42:20.393717 1908903 out.go:358] Setting ErrFile to fd 2...
	I0414 15:42:20.393721 1908903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:42:20.394014 1908903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:42:20.394737 1908903 out.go:352] Setting JSON to false
	I0414 15:42:20.396002 1908903 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":41084,"bootTime":1744604256,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:42:20.396077 1908903 start.go:139] virtualization: kvm guest
	I0414 15:42:20.398284 1908903 out.go:177] * [bridge-036922] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:42:20.399747 1908903 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:42:20.399774 1908903 notify.go:220] Checking for updates...
	I0414 15:42:20.402506 1908903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:42:20.403700 1908903 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:42:20.404951 1908903 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:42:20.406045 1908903 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:42:20.407237 1908903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:42:20.408819 1908903 config.go:182] Loaded profile config "enable-default-cni-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:20.408920 1908903 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:20.409003 1908903 config.go:182] Loaded profile config "old-k8s-version-529869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 15:42:20.409078 1908903 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:42:20.449900 1908903 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 15:42:20.451424 1908903 start.go:297] selected driver: kvm2
	I0414 15:42:20.451445 1908903 start.go:901] validating driver "kvm2" against <nil>
	I0414 15:42:20.451460 1908903 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:42:20.452406 1908903 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:42:20.452490 1908903 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 15:42:20.470925 1908903 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 15:42:20.470988 1908903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 15:42:20.471237 1908903 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:42:20.471280 1908903 cni.go:84] Creating CNI manager for "bridge"
	I0414 15:42:20.471289 1908903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 15:42:20.471347 1908903 start.go:340] cluster config:
	{Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:42:20.471467 1908903 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:42:20.473355 1908903 out.go:177] * Starting "bridge-036922" primary control-plane node in "bridge-036922" cluster
	I0414 15:42:18.311367 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:18.311873 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:18.311907 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:18.311830 1907444 retry.go:31] will retry after 1.961785823s: waiting for domain to come up
	I0414 15:42:20.275622 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:20.276217 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:20.276245 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:20.276160 1907444 retry.go:31] will retry after 3.443279587s: waiting for domain to come up
	I0414 15:42:18.552316 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:21.052659 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:20.474918 1908903 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:42:20.474969 1908903 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 15:42:20.474980 1908903 cache.go:56] Caching tarball of preloaded images
	I0414 15:42:20.475087 1908903 preload.go:172] Found /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 15:42:20.475100 1908903 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 15:42:20.475200 1908903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/config.json ...
	I0414 15:42:20.475219 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/config.json: {Name:mk46811239729f3d2abef41cf6cd2fb6300eacaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:20.475365 1908903 start.go:360] acquireMachinesLock for bridge-036922: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:42:23.721372 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:23.721981 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:23.722015 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:23.721948 1907444 retry.go:31] will retry after 3.812874947s: waiting for domain to come up
	I0414 15:42:27.536454 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:27.537033 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:27.537056 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:27.537004 1907444 retry.go:31] will retry after 3.540212628s: waiting for domain to come up
	I0414 15:42:23.551530 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:25.552074 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:28.051484 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:32.627768 1908903 start.go:364] duration metric: took 12.152363514s to acquireMachinesLock for "bridge-036922"
	I0414 15:42:32.627850 1908903 start.go:93] Provisioning new machine with config: &{Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:42:32.627970 1908903 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 15:42:31.081114 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.081620 1907421 main.go:141] libmachine: (flannel-036922) found domain IP: 192.168.72.200
	I0414 15:42:31.081647 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has current primary IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.081654 1907421 main.go:141] libmachine: (flannel-036922) reserving static IP address...
	I0414 15:42:31.082097 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find host DHCP lease matching {name: "flannel-036922", mac: "52:54:00:47:a6:f3", ip: "192.168.72.200"} in network mk-flannel-036922
	I0414 15:42:31.169991 1907421 main.go:141] libmachine: (flannel-036922) DBG | Getting to WaitForSSH function...
	I0414 15:42:31.170026 1907421 main.go:141] libmachine: (flannel-036922) reserved static IP address 192.168.72.200 for domain flannel-036922
	I0414 15:42:31.170038 1907421 main.go:141] libmachine: (flannel-036922) waiting for SSH...
	I0414 15:42:31.173332 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.173746 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.173785 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.173994 1907421 main.go:141] libmachine: (flannel-036922) DBG | Using SSH client type: external
	I0414 15:42:31.174024 1907421 main.go:141] libmachine: (flannel-036922) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa (-rw-------)
	I0414 15:42:31.174056 1907421 main.go:141] libmachine: (flannel-036922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:42:31.174071 1907421 main.go:141] libmachine: (flannel-036922) DBG | About to run SSH command:
	I0414 15:42:31.174081 1907421 main.go:141] libmachine: (flannel-036922) DBG | exit 0
	I0414 15:42:31.299043 1907421 main.go:141] libmachine: (flannel-036922) DBG | SSH cmd err, output: <nil>: 
	I0414 15:42:31.299375 1907421 main.go:141] libmachine: (flannel-036922) KVM machine creation complete
	I0414 15:42:31.299910 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetConfigRaw
	I0414 15:42:31.300482 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:31.300707 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:31.300937 1907421 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 15:42:31.300956 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:31.302412 1907421 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 15:42:31.302427 1907421 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 15:42:31.302432 1907421 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 15:42:31.302437 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.305226 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.305622 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.305653 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.305832 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.306067 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.306262 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.306413 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.306582 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.306835 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.306848 1907421 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 15:42:31.409981 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:31.410015 1907421 main.go:141] libmachine: Detecting the provisioner...
	I0414 15:42:31.410027 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.412803 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.413105 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.413155 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.413279 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.413504 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.413690 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.413892 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.414073 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.414440 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.414462 1907421 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 15:42:31.519809 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 15:42:31.519916 1907421 main.go:141] libmachine: found compatible host: buildroot
	I0414 15:42:31.519927 1907421 main.go:141] libmachine: Provisioning with buildroot...
	I0414 15:42:31.519936 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetMachineName
	I0414 15:42:31.520223 1907421 buildroot.go:166] provisioning hostname "flannel-036922"
	I0414 15:42:31.520239 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetMachineName
	I0414 15:42:31.520436 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.523093 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.523484 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.523524 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.523722 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.523907 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.524062 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.524183 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.524321 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.524614 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.524632 1907421 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-036922 && echo "flannel-036922" | sudo tee /etc/hostname
	I0414 15:42:31.645537 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-036922
	
	I0414 15:42:31.645576 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.648224 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.648558 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.648593 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.648747 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.648942 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.649094 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.649255 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.649473 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.649681 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.649696 1907421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-036922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-036922/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-036922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:42:31.764596 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:31.764638 1907421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:42:31.764666 1907421 buildroot.go:174] setting up certificates
	I0414 15:42:31.764679 1907421 provision.go:84] configureAuth start
	I0414 15:42:31.764694 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetMachineName
	I0414 15:42:31.765045 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:31.768031 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.768340 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.768368 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.768520 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.770840 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.771160 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.771189 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.771328 1907421 provision.go:143] copyHostCerts
	I0414 15:42:31.771404 1907421 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:42:31.771416 1907421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:42:31.771486 1907421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:42:31.771610 1907421 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:42:31.771619 1907421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:42:31.771644 1907421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:42:31.771710 1907421 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:42:31.771717 1907421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:42:31.771741 1907421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:42:31.771791 1907421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.flannel-036922 san=[127.0.0.1 192.168.72.200 flannel-036922 localhost minikube]
	I0414 15:42:31.968023 1907421 provision.go:177] copyRemoteCerts
	I0414 15:42:31.968092 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:42:31.968117 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.970932 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.971208 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.971239 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.971419 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.971624 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.971760 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.971949 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.059121 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:42:32.086750 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0414 15:42:32.113750 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:42:32.140600 1907421 provision.go:87] duration metric: took 375.905384ms to configureAuth
	I0414 15:42:32.140649 1907421 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:42:32.140825 1907421 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:32.140910 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.143669 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.144072 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.144098 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.144301 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.144503 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.144664 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.144839 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.145044 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:32.145348 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:32.145371 1907421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:42:32.376226 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:42:32.376251 1907421 main.go:141] libmachine: Checking connection to Docker...
	I0414 15:42:32.376267 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetURL
	I0414 15:42:32.377737 1907421 main.go:141] libmachine: (flannel-036922) DBG | using libvirt version 6000000
	I0414 15:42:32.380146 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.380479 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.380510 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.380661 1907421 main.go:141] libmachine: Docker is up and running!
	I0414 15:42:32.380675 1907421 main.go:141] libmachine: Reticulating splines...
	I0414 15:42:32.380683 1907421 client.go:171] duration metric: took 24.152526095s to LocalClient.Create
	I0414 15:42:32.380708 1907421 start.go:167] duration metric: took 24.152593581s to libmachine.API.Create "flannel-036922"
	I0414 15:42:32.380736 1907421 start.go:293] postStartSetup for "flannel-036922" (driver="kvm2")
	I0414 15:42:32.380753 1907421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:42:32.380784 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.381034 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:42:32.381060 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.383436 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.383744 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.383765 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.383939 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.384128 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.384303 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.384449 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.469641 1907421 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:42:32.474716 1907421 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:42:32.474754 1907421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:42:32.474843 1907421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:42:32.474963 1907421 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:42:32.475080 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:42:32.485571 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:42:32.513908 1907421 start.go:296] duration metric: took 133.150087ms for postStartSetup
	I0414 15:42:32.513976 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetConfigRaw
	I0414 15:42:32.514671 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:32.517434 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.517794 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.517830 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.518116 1907421 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/config.json ...
	I0414 15:42:32.518321 1907421 start.go:128] duration metric: took 24.310122388s to createHost
	I0414 15:42:32.518346 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.520587 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.520903 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.520939 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.521138 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.521368 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.521508 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.521672 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.521818 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:32.522073 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:32.522085 1907421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:42:32.627543 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744645352.607238172
	
	I0414 15:42:32.627581 1907421 fix.go:216] guest clock: 1744645352.607238172
	I0414 15:42:32.627603 1907421 fix.go:229] Guest: 2025-04-14 15:42:32.607238172 +0000 UTC Remote: 2025-04-14 15:42:32.518333951 +0000 UTC m=+24.431599100 (delta=88.904221ms)
	I0414 15:42:32.627642 1907421 fix.go:200] guest clock delta is within tolerance: 88.904221ms
	I0414 15:42:32.627654 1907421 start.go:83] releasing machines lock for "flannel-036922", held for 24.419524725s
	I0414 15:42:32.627691 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.628088 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:32.631249 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.631790 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.631818 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.632042 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.632785 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.633042 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.633151 1907421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:42:32.633227 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.633252 1907421 ssh_runner.go:195] Run: cat /version.json
	I0414 15:42:32.633267 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.636525 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.636562 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.636948 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.636985 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.637010 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.637085 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.637238 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.637465 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.637483 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.637697 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.637723 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.637882 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.637900 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.638077 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.717463 1907421 ssh_runner.go:195] Run: systemctl --version
	I0414 15:42:32.745427 1907421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:42:32.909851 1907421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:42:32.916503 1907421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:42:32.916578 1907421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:42:32.933971 1907421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:42:32.933995 1907421 start.go:495] detecting cgroup driver to use...
	I0414 15:42:32.934071 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:42:32.952308 1907421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:42:32.970781 1907421 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:42:32.970865 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:42:32.987714 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:42:33.006216 1907421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:42:30.551892 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:32.552139 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:33.157399 1907421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:42:33.324202 1907421 docker.go:233] disabling docker service ...
	I0414 15:42:33.324273 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:42:33.341314 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:42:33.357080 1907421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:42:33.549837 1907421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:42:33.699436 1907421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:42:33.714710 1907421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:42:33.738926 1907421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 15:42:33.739015 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.751493 1907421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:42:33.751594 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.764325 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.776597 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.789601 1907421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:42:33.802342 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.813914 1907421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.837591 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.849585 1907421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:42:33.862417 1907421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:42:33.862494 1907421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:42:33.879615 1907421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 15:42:33.891734 1907421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:34.014337 1907421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 15:42:34.117483 1907421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:42:34.117570 1907421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:42:34.123036 1907421 start.go:563] Will wait 60s for crictl version
	I0414 15:42:34.123111 1907421 ssh_runner.go:195] Run: which crictl
	I0414 15:42:34.128066 1907421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:42:34.173872 1907421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:42:34.173955 1907421 ssh_runner.go:195] Run: crio --version
	I0414 15:42:34.210232 1907421 ssh_runner.go:195] Run: crio --version
	I0414 15:42:34.246653 1907421 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 15:42:32.631413 1908903 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 15:42:32.631616 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:32.631698 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:32.649503 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0414 15:42:32.649969 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:32.650582 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:42:32.650606 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:32.651035 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:32.651256 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:32.651415 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:32.651580 1908903 start.go:159] libmachine.API.Create for "bridge-036922" (driver="kvm2")
	I0414 15:42:32.651640 1908903 client.go:168] LocalClient.Create starting
	I0414 15:42:32.651683 1908903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem
	I0414 15:42:32.651736 1908903 main.go:141] libmachine: Decoding PEM data...
	I0414 15:42:32.651761 1908903 main.go:141] libmachine: Parsing certificate...
	I0414 15:42:32.651848 1908903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem
	I0414 15:42:32.651877 1908903 main.go:141] libmachine: Decoding PEM data...
	I0414 15:42:32.651896 1908903 main.go:141] libmachine: Parsing certificate...
	I0414 15:42:32.651923 1908903 main.go:141] libmachine: Running pre-create checks...
	I0414 15:42:32.651944 1908903 main.go:141] libmachine: (bridge-036922) Calling .PreCreateCheck
	I0414 15:42:32.652284 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetConfigRaw
	I0414 15:42:32.652746 1908903 main.go:141] libmachine: Creating machine...
	I0414 15:42:32.652761 1908903 main.go:141] libmachine: (bridge-036922) Calling .Create
	I0414 15:42:32.652923 1908903 main.go:141] libmachine: (bridge-036922) creating KVM machine...
	I0414 15:42:32.652944 1908903 main.go:141] libmachine: (bridge-036922) creating network...
	I0414 15:42:32.654276 1908903 main.go:141] libmachine: (bridge-036922) DBG | found existing default KVM network
	I0414 15:42:32.655546 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.655372 1909012 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:fb:6f} reservation:<nil>}
	I0414 15:42:32.656280 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.656199 1909012 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:dc:27:da} reservation:<nil>}
	I0414 15:42:32.657561 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.657462 1909012 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000292ac0}
	I0414 15:42:32.657591 1908903 main.go:141] libmachine: (bridge-036922) DBG | created network xml: 
	I0414 15:42:32.657603 1908903 main.go:141] libmachine: (bridge-036922) DBG | <network>
	I0414 15:42:32.657610 1908903 main.go:141] libmachine: (bridge-036922) DBG |   <name>mk-bridge-036922</name>
	I0414 15:42:32.657618 1908903 main.go:141] libmachine: (bridge-036922) DBG |   <dns enable='no'/>
	I0414 15:42:32.657625 1908903 main.go:141] libmachine: (bridge-036922) DBG |   
	I0414 15:42:32.657634 1908903 main.go:141] libmachine: (bridge-036922) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0414 15:42:32.657644 1908903 main.go:141] libmachine: (bridge-036922) DBG |     <dhcp>
	I0414 15:42:32.657656 1908903 main.go:141] libmachine: (bridge-036922) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0414 15:42:32.657665 1908903 main.go:141] libmachine: (bridge-036922) DBG |     </dhcp>
	I0414 15:42:32.657673 1908903 main.go:141] libmachine: (bridge-036922) DBG |   </ip>
	I0414 15:42:32.657685 1908903 main.go:141] libmachine: (bridge-036922) DBG |   
	I0414 15:42:32.657692 1908903 main.go:141] libmachine: (bridge-036922) DBG | </network>
	I0414 15:42:32.657700 1908903 main.go:141] libmachine: (bridge-036922) DBG | 
	I0414 15:42:32.663623 1908903 main.go:141] libmachine: (bridge-036922) DBG | trying to create private KVM network mk-bridge-036922 192.168.61.0/24...
	I0414 15:42:32.748953 1908903 main.go:141] libmachine: (bridge-036922) DBG | private KVM network mk-bridge-036922 192.168.61.0/24 created
	I0414 15:42:32.748994 1908903 main.go:141] libmachine: (bridge-036922) setting up store path in /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922 ...
	I0414 15:42:32.749036 1908903 main.go:141] libmachine: (bridge-036922) building disk image from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 15:42:32.749186 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.748956 1909012 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:42:32.749224 1908903 main.go:141] libmachine: (bridge-036922) Downloading /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 15:42:33.058633 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.058470 1909012 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa...
	I0414 15:42:33.132442 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.132298 1909012 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/bridge-036922.rawdisk...
	I0414 15:42:33.132477 1908903 main.go:141] libmachine: (bridge-036922) DBG | Writing magic tar header
	I0414 15:42:33.132492 1908903 main.go:141] libmachine: (bridge-036922) DBG | Writing SSH key tar header
	I0414 15:42:33.132503 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.132444 1909012 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922 ...
	I0414 15:42:33.132598 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922
	I0414 15:42:33.132618 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines
	I0414 15:42:33.132632 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922 (perms=drwx------)
	I0414 15:42:33.132653 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines (perms=drwxr-xr-x)
	I0414 15:42:33.132668 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube (perms=drwxr-xr-x)
	I0414 15:42:33.132681 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971 (perms=drwxrwxr-x)
	I0414 15:42:33.132691 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 15:42:33.132708 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 15:42:33.132722 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:42:33.132731 1908903 main.go:141] libmachine: (bridge-036922) creating domain...
	I0414 15:42:33.132765 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971
	I0414 15:42:33.132797 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 15:42:33.132810 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins
	I0414 15:42:33.132825 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home
	I0414 15:42:33.132858 1908903 main.go:141] libmachine: (bridge-036922) DBG | skipping /home - not owner
	I0414 15:42:33.134361 1908903 main.go:141] libmachine: (bridge-036922) define libvirt domain using xml: 
	I0414 15:42:33.134417 1908903 main.go:141] libmachine: (bridge-036922) <domain type='kvm'>
	I0414 15:42:33.134428 1908903 main.go:141] libmachine: (bridge-036922)   <name>bridge-036922</name>
	I0414 15:42:33.134436 1908903 main.go:141] libmachine: (bridge-036922)   <memory unit='MiB'>3072</memory>
	I0414 15:42:33.134447 1908903 main.go:141] libmachine: (bridge-036922)   <vcpu>2</vcpu>
	I0414 15:42:33.134454 1908903 main.go:141] libmachine: (bridge-036922)   <features>
	I0414 15:42:33.134476 1908903 main.go:141] libmachine: (bridge-036922)     <acpi/>
	I0414 15:42:33.134491 1908903 main.go:141] libmachine: (bridge-036922)     <apic/>
	I0414 15:42:33.134498 1908903 main.go:141] libmachine: (bridge-036922)     <pae/>
	I0414 15:42:33.134503 1908903 main.go:141] libmachine: (bridge-036922)     
	I0414 15:42:33.134515 1908903 main.go:141] libmachine: (bridge-036922)   </features>
	I0414 15:42:33.134526 1908903 main.go:141] libmachine: (bridge-036922)   <cpu mode='host-passthrough'>
	I0414 15:42:33.134533 1908903 main.go:141] libmachine: (bridge-036922)   
	I0414 15:42:33.134542 1908903 main.go:141] libmachine: (bridge-036922)   </cpu>
	I0414 15:42:33.134548 1908903 main.go:141] libmachine: (bridge-036922)   <os>
	I0414 15:42:33.134557 1908903 main.go:141] libmachine: (bridge-036922)     <type>hvm</type>
	I0414 15:42:33.134591 1908903 main.go:141] libmachine: (bridge-036922)     <boot dev='cdrom'/>
	I0414 15:42:33.134612 1908903 main.go:141] libmachine: (bridge-036922)     <boot dev='hd'/>
	I0414 15:42:33.134622 1908903 main.go:141] libmachine: (bridge-036922)     <bootmenu enable='no'/>
	I0414 15:42:33.134628 1908903 main.go:141] libmachine: (bridge-036922)   </os>
	I0414 15:42:33.134637 1908903 main.go:141] libmachine: (bridge-036922)   <devices>
	I0414 15:42:33.134649 1908903 main.go:141] libmachine: (bridge-036922)     <disk type='file' device='cdrom'>
	I0414 15:42:33.134666 1908903 main.go:141] libmachine: (bridge-036922)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/boot2docker.iso'/>
	I0414 15:42:33.134677 1908903 main.go:141] libmachine: (bridge-036922)       <target dev='hdc' bus='scsi'/>
	I0414 15:42:33.134686 1908903 main.go:141] libmachine: (bridge-036922)       <readonly/>
	I0414 15:42:33.134695 1908903 main.go:141] libmachine: (bridge-036922)     </disk>
	I0414 15:42:33.134704 1908903 main.go:141] libmachine: (bridge-036922)     <disk type='file' device='disk'>
	I0414 15:42:33.134716 1908903 main.go:141] libmachine: (bridge-036922)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 15:42:33.134734 1908903 main.go:141] libmachine: (bridge-036922)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/bridge-036922.rawdisk'/>
	I0414 15:42:33.134745 1908903 main.go:141] libmachine: (bridge-036922)       <target dev='hda' bus='virtio'/>
	I0414 15:42:33.134753 1908903 main.go:141] libmachine: (bridge-036922)     </disk>
	I0414 15:42:33.134763 1908903 main.go:141] libmachine: (bridge-036922)     <interface type='network'>
	I0414 15:42:33.134772 1908903 main.go:141] libmachine: (bridge-036922)       <source network='mk-bridge-036922'/>
	I0414 15:42:33.134782 1908903 main.go:141] libmachine: (bridge-036922)       <model type='virtio'/>
	I0414 15:42:33.134790 1908903 main.go:141] libmachine: (bridge-036922)     </interface>
	I0414 15:42:33.134798 1908903 main.go:141] libmachine: (bridge-036922)     <interface type='network'>
	I0414 15:42:33.134804 1908903 main.go:141] libmachine: (bridge-036922)       <source network='default'/>
	I0414 15:42:33.134810 1908903 main.go:141] libmachine: (bridge-036922)       <model type='virtio'/>
	I0414 15:42:33.134823 1908903 main.go:141] libmachine: (bridge-036922)     </interface>
	I0414 15:42:33.134831 1908903 main.go:141] libmachine: (bridge-036922)     <serial type='pty'>
	I0414 15:42:33.134841 1908903 main.go:141] libmachine: (bridge-036922)       <target port='0'/>
	I0414 15:42:33.134851 1908903 main.go:141] libmachine: (bridge-036922)     </serial>
	I0414 15:42:33.134860 1908903 main.go:141] libmachine: (bridge-036922)     <console type='pty'>
	I0414 15:42:33.134870 1908903 main.go:141] libmachine: (bridge-036922)       <target type='serial' port='0'/>
	I0414 15:42:33.134878 1908903 main.go:141] libmachine: (bridge-036922)     </console>
	I0414 15:42:33.134887 1908903 main.go:141] libmachine: (bridge-036922)     <rng model='virtio'>
	I0414 15:42:33.134893 1908903 main.go:141] libmachine: (bridge-036922)       <backend model='random'>/dev/random</backend>
	I0414 15:42:33.134901 1908903 main.go:141] libmachine: (bridge-036922)     </rng>
	I0414 15:42:33.134928 1908903 main.go:141] libmachine: (bridge-036922)     
	I0414 15:42:33.134945 1908903 main.go:141] libmachine: (bridge-036922)     
	I0414 15:42:33.134958 1908903 main.go:141] libmachine: (bridge-036922)   </devices>
	I0414 15:42:33.134967 1908903 main.go:141] libmachine: (bridge-036922) </domain>
	I0414 15:42:33.134981 1908903 main.go:141] libmachine: (bridge-036922) 
	I0414 15:42:33.139633 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:ce:30:4b in network default
	I0414 15:42:33.140227 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:33.140266 1908903 main.go:141] libmachine: (bridge-036922) starting domain...
	I0414 15:42:33.140279 1908903 main.go:141] libmachine: (bridge-036922) ensuring networks are active...
	I0414 15:42:33.140917 1908903 main.go:141] libmachine: (bridge-036922) Ensuring network default is active
	I0414 15:42:33.141340 1908903 main.go:141] libmachine: (bridge-036922) Ensuring network mk-bridge-036922 is active
	I0414 15:42:33.142027 1908903 main.go:141] libmachine: (bridge-036922) getting domain XML...
	I0414 15:42:33.143089 1908903 main.go:141] libmachine: (bridge-036922) creating domain...
	I0414 15:42:33.536114 1908903 main.go:141] libmachine: (bridge-036922) waiting for IP...
	I0414 15:42:33.536974 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:33.537437 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:33.537518 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.537440 1909012 retry.go:31] will retry after 243.753367ms: waiting for domain to come up
	I0414 15:42:33.783413 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:33.784074 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:33.784104 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.784044 1909012 retry.go:31] will retry after 339.050332ms: waiting for domain to come up
	I0414 15:42:34.124346 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:34.124819 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:34.124847 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:34.124793 1909012 retry.go:31] will retry after 477.978489ms: waiting for domain to come up
	I0414 15:42:34.604689 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:34.605405 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:34.605478 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:34.605396 1909012 retry.go:31] will retry after 606.717012ms: waiting for domain to come up
	I0414 15:42:35.214566 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:35.215302 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:35.215335 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:35.215304 1909012 retry.go:31] will retry after 585.677483ms: waiting for domain to come up
	I0414 15:42:34.248060 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:34.251061 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:34.251494 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:34.251536 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:34.251790 1907421 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 15:42:34.257345 1907421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:42:34.271269 1907421 kubeadm.go:883] updating cluster {Name:flannel-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:42:34.271419 1907421 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:42:34.271491 1907421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:42:34.310047 1907421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 15:42:34.310148 1907421 ssh_runner.go:195] Run: which lz4
	I0414 15:42:34.314914 1907421 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:42:34.319663 1907421 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:42:34.319706 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 15:42:36.005122 1907421 crio.go:462] duration metric: took 1.690246926s to copy over tarball
	I0414 15:42:36.005231 1907421 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 15:42:34.553205 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:37.052635 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:38.486201 1907421 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.480920023s)
	I0414 15:42:38.486301 1907421 crio.go:469] duration metric: took 2.481131687s to extract the tarball
	I0414 15:42:38.486328 1907421 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 15:42:38.536845 1907421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:42:38.588854 1907421 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 15:42:38.588889 1907421 cache_images.go:84] Images are preloaded, skipping loading
	I0414 15:42:38.588901 1907421 kubeadm.go:934] updating node { 192.168.72.200 8443 v1.32.2 crio true true} ...
	I0414 15:42:38.589066 1907421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-036922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0414 15:42:38.589161 1907421 ssh_runner.go:195] Run: crio config
	I0414 15:42:38.639561 1907421 cni.go:84] Creating CNI manager for "flannel"
	I0414 15:42:38.639596 1907421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 15:42:38.639626 1907421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.200 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-036922 NodeName:flannel-036922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 15:42:38.639887 1907421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-036922"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 15:42:38.640037 1907421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 15:42:38.651901 1907421 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 15:42:38.651997 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 15:42:38.662036 1907421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0414 15:42:38.680585 1907421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 15:42:38.698787 1907421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0414 15:42:38.721640 1907421 ssh_runner.go:195] Run: grep 192.168.72.200	control-plane.minikube.internal$ /etc/hosts
	I0414 15:42:38.726592 1907421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:42:38.740768 1907421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:38.899231 1907421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:42:38.918385 1907421 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922 for IP: 192.168.72.200
	I0414 15:42:38.918418 1907421 certs.go:194] generating shared ca certs ...
	I0414 15:42:38.918437 1907421 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:38.918692 1907421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 15:42:38.918762 1907421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 15:42:38.918790 1907421 certs.go:256] generating profile certs ...
	I0414 15:42:38.918873 1907421 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.key
	I0414 15:42:38.918893 1907421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt with IP's: []
	I0414 15:42:39.040105 1907421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt ...
	I0414 15:42:39.040138 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: {Name:mk2541d497355f75330e1e8d45ca7c05c9151252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.040344 1907421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.key ...
	I0414 15:42:39.040361 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.key: {Name:mk380b7bf852abf1b8988acb006ad6fc4e37f4e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.040469 1907421 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca
	I0414 15:42:39.040487 1907421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.200]
	I0414 15:42:39.250195 1907421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca ...
	I0414 15:42:39.250233 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca: {Name:mkbe9b8905a248872f1e8ad1d846ab894bf1ccb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.250430 1907421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca ...
	I0414 15:42:39.250443 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca: {Name:mk00eed7dd27975a2c63b91d58b73bd49c86808b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.250518 1907421 certs.go:381] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt
	I0414 15:42:39.250615 1907421 certs.go:385] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key
	I0414 15:42:39.250679 1907421 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key
	I0414 15:42:39.250697 1907421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt with IP's: []
	I0414 15:42:39.442422 1907421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt ...
	I0414 15:42:39.442455 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt: {Name:mka0a36bc874e1164bc79c06b6893dbd73138c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.442664 1907421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key ...
	I0414 15:42:39.442682 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key: {Name:mkee6ef65a530aee53bdaac10b3fb60ee09dbe64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.442891 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem (1338 bytes)
	W0414 15:42:39.442929 1907421 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270_empty.pem, impossibly tiny 0 bytes
	I0414 15:42:39.442940 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 15:42:39.442967 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 15:42:39.442990 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 15:42:39.443010 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 15:42:39.443051 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:42:39.443680 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 15:42:39.474252 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:42:39.504144 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:42:39.530953 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:42:39.560025 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 15:42:39.592232 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 15:42:39.640260 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:42:39.670285 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 15:42:39.698670 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:42:39.726986 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:42:39.754399 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:42:39.788251 1907421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:42:39.807950 1907421 ssh_runner.go:195] Run: openssl version
	I0414 15:42:39.814532 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:42:39.827541 1907421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:42:39.834201 1907421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:42:39.834285 1907421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:42:39.841587 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:42:39.853993 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:42:39.879246 1907421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:42:39.884226 1907421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:42:39.884303 1907421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:42:39.890625 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 15:42:39.903508 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:42:39.915981 1907421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:42:39.921299 1907421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:42:39.921368 1907421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:42:39.927524 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 15:42:39.939848 1907421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:42:39.945029 1907421 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 15:42:39.945115 1907421 kubeadm.go:392] StartCluster: {Name:flannel-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:42:39.945228 1907421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:42:39.945336 1907421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:42:39.993625 1907421 cri.go:89] found id: ""
	I0414 15:42:39.993726 1907421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 15:42:40.007930 1907421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:42:40.022297 1907421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:42:40.033983 1907421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:42:40.034008 1907421 kubeadm.go:157] found existing configuration files:
	
	I0414 15:42:40.034060 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:42:40.044411 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:42:40.044493 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:42:40.057768 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:42:40.068947 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:42:40.069049 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:42:40.080075 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:42:40.090907 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:42:40.090972 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:42:40.102034 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:42:40.113045 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:42:40.113105 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
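For readers following the stale-config cleanup recorded above (kubeadm.go:163), a minimal sketch of that check is shown below; the endpoint and file paths are taken from the log, everything else is hypothetical and simplified (the real code runs these commands on the guest over SSH rather than locally):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint (or the whole file) is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s may not be in %s, removing (%v)\n", endpoint, f, err)
    			// Remove the stale file so the following kubeadm init regenerates it.
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }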
	I0414 15:42:40.123704 1907421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:42:40.185411 1907421 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 15:42:40.185554 1907421 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:42:40.312075 1907421 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:42:40.312258 1907421 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:42:40.312435 1907421 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 15:42:40.324898 1907421 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:42:35.802698 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:35.803793 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:35.803828 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:35.803707 1909012 retry.go:31] will retry after 741.40736ms: waiting for domain to come up
	I0414 15:42:36.546572 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:36.547205 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:36.547270 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:36.547183 1909012 retry.go:31] will retry after 1.039019091s: waiting for domain to come up
	I0414 15:42:37.587454 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:37.588056 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:37.588092 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:37.588030 1909012 retry.go:31] will retry after 1.343543316s: waiting for domain to come up
	I0414 15:42:38.933902 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:38.934408 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:38.934499 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:38.934406 1909012 retry.go:31] will retry after 1.727468698s: waiting for domain to come up
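The interleaved bridge-036922 lines above show the retry-with-growing-delay pattern behind "will retry after ...: waiting for domain to come up". A self-contained sketch of that pattern, with a stand-in lookup instead of the real libvirt query and made-up delay constants, is:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func main() {
    	attempts := 0
    	// Stand-in for the libvirt lookup; succeeds once the "domain" has an IP.
    	lookupIP := func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("unable to find current IP address of domain")
    		}
    		return nil
    	}
    	delay := 500 * time.Millisecond
    	for {
    		if err := lookupIP(); err == nil {
    			fmt.Printf("domain is up after %d attempts\n", attempts)
    			return
    		}
    		// Grow the delay and add jitter, like the increasing waits in the log.
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2
    	}
    }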
	I0414 15:42:40.461045 1907421 out.go:235]   - Generating certificates and keys ...
	I0414 15:42:40.461189 1907421 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:42:40.461295 1907421 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:42:40.461411 1907421 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 15:42:40.576540 1907421 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 15:42:41.022193 1907421 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 15:42:41.083437 1907421 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 15:42:41.196088 1907421 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 15:42:41.196393 1907421 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-036922 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0414 15:42:41.305312 1907421 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 15:42:41.305484 1907421 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-036922 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0414 15:42:41.499140 1907421 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 15:42:41.648257 1907421 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 15:42:41.792405 1907421 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 15:42:41.792718 1907421 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:42:41.986714 1907421 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:42:42.087153 1907421 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 15:42:42.240947 1907421 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:42:42.386910 1907421 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:42:42.522160 1907421 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:42:42.523999 1907421 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:42:42.528115 1907421 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:42:42.574611 1907421 out.go:235]   - Booting up control plane ...
	I0414 15:42:42.574762 1907421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:42:42.574856 1907421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:42:42.574940 1907421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:42:42.575132 1907421 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:42:42.575258 1907421 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:42:42.575350 1907421 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:42:42.720695 1907421 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 15:42:42.720861 1907421 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 15:42:39.553503 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:41.567599 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:40.664501 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:40.665113 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:40.665156 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:40.665097 1909012 retry.go:31] will retry after 2.255462045s: waiting for domain to come up
	I0414 15:42:42.921827 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:42.922516 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:42.922554 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:42.922480 1909012 retry.go:31] will retry after 2.269647989s: waiting for domain to come up
	I0414 15:42:45.194050 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:45.194621 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:45.194654 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:45.194559 1909012 retry.go:31] will retry after 2.479039637s: waiting for domain to come up
	I0414 15:42:44.113357 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:45.058678 1905530 pod_ready.go:93] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.058714 1905530 pod_ready.go:82] duration metric: took 33.01340484s for pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.058732 1905530 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.061628 1905530 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-ss42g" not found
	I0414 15:42:45.061664 1905530 pod_ready.go:82] duration metric: took 2.923616ms for pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace to be "Ready" ...
	E0414 15:42:45.061680 1905530 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-ss42g" not found
	I0414 15:42:45.061691 1905530 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.070770 1905530 pod_ready.go:93] pod "etcd-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.070808 1905530 pod_ready.go:82] duration metric: took 9.101557ms for pod "etcd-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.070826 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.079164 1905530 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.079198 1905530 pod_ready.go:82] duration metric: took 8.362407ms for pod "kube-apiserver-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.079213 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.087476 1905530 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.087505 1905530 pod_ready.go:82] duration metric: took 8.282442ms for pod "kube-controller-manager-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.087518 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-cf9hn" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.249123 1905530 pod_ready.go:93] pod "kube-proxy-cf9hn" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.249155 1905530 pod_ready.go:82] duration metric: took 161.628764ms for pod "kube-proxy-cf9hn" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.249170 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.650160 1905530 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.650266 1905530 pod_ready.go:82] duration metric: took 401.084136ms for pod "kube-scheduler-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.650296 1905530 pod_ready.go:39] duration metric: took 33.615016594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:42:45.650331 1905530 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:42:45.650448 1905530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:42:45.673971 1905530 api_server.go:72] duration metric: took 34.576366052s to wait for apiserver process to appear ...
	I0414 15:42:45.674014 1905530 api_server.go:88] waiting for apiserver healthz status ...
	I0414 15:42:45.674039 1905530 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0414 15:42:45.682032 1905530 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0414 15:42:45.683306 1905530 api_server.go:141] control plane version: v1.32.2
	I0414 15:42:45.683334 1905530 api_server.go:131] duration metric: took 9.31155ms to wait for apiserver health ...
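The healthz wait recorded just above (api_server.go:253/279) amounts to polling the apiserver endpoint until it answers 200. A minimal sketch follows; the IP and port come from the log, the timeout values are made up, and TLS verification is skipped only to keep the sketch self-contained (the real check trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.39.227:8443/healthz"
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthz returned 200: ok")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver did not report healthy before the deadline")
    }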
	I0414 15:42:45.683345 1905530 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 15:42:45.851783 1905530 system_pods.go:59] 7 kube-system pods found
	I0414 15:42:45.851838 1905530 system_pods.go:61] "coredns-668d6bf9bc-bwv4t" [790563e2-b22e-4bbe-bbc5-b52f76b839b5] Running
	I0414 15:42:45.851847 1905530 system_pods.go:61] "etcd-enable-default-cni-036922" [527007de-831a-4582-9cbb-baa01fc7f75a] Running
	I0414 15:42:45.851855 1905530 system_pods.go:61] "kube-apiserver-enable-default-cni-036922" [d3500886-ec33-4079-9f8d-efe868d36abe] Running
	I0414 15:42:45.851861 1905530 system_pods.go:61] "kube-controller-manager-enable-default-cni-036922" [109c13d5-06e7-4b5a-af83-2c859621953f] Running
	I0414 15:42:45.851870 1905530 system_pods.go:61] "kube-proxy-cf9hn" [75a57fce-ef6e-43a7-9c2f-57b3a2b02829] Running
	I0414 15:42:45.851875 1905530 system_pods.go:61] "kube-scheduler-enable-default-cni-036922" [d0f475a2-3fcc-44f3-8eb9-e3e2aaebb279] Running
	I0414 15:42:45.851883 1905530 system_pods.go:61] "storage-provisioner" [5b286627-a3ba-4c03-ab91-e9dc6297afd2] Running
	I0414 15:42:45.851892 1905530 system_pods.go:74] duration metric: took 168.539138ms to wait for pod list to return data ...
	I0414 15:42:45.851906 1905530 default_sa.go:34] waiting for default service account to be created ...
	I0414 15:42:46.051425 1905530 default_sa.go:45] found service account: "default"
	I0414 15:42:46.051460 1905530 default_sa.go:55] duration metric: took 199.54254ms for default service account to be created ...
	I0414 15:42:46.051473 1905530 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 15:42:46.251287 1905530 system_pods.go:86] 7 kube-system pods found
	I0414 15:42:46.251414 1905530 system_pods.go:89] "coredns-668d6bf9bc-bwv4t" [790563e2-b22e-4bbe-bbc5-b52f76b839b5] Running
	I0414 15:42:46.251431 1905530 system_pods.go:89] "etcd-enable-default-cni-036922" [527007de-831a-4582-9cbb-baa01fc7f75a] Running
	I0414 15:42:46.251438 1905530 system_pods.go:89] "kube-apiserver-enable-default-cni-036922" [d3500886-ec33-4079-9f8d-efe868d36abe] Running
	I0414 15:42:46.251447 1905530 system_pods.go:89] "kube-controller-manager-enable-default-cni-036922" [109c13d5-06e7-4b5a-af83-2c859621953f] Running
	I0414 15:42:46.251454 1905530 system_pods.go:89] "kube-proxy-cf9hn" [75a57fce-ef6e-43a7-9c2f-57b3a2b02829] Running
	I0414 15:42:46.251459 1905530 system_pods.go:89] "kube-scheduler-enable-default-cni-036922" [d0f475a2-3fcc-44f3-8eb9-e3e2aaebb279] Running
	I0414 15:42:46.251465 1905530 system_pods.go:89] "storage-provisioner" [5b286627-a3ba-4c03-ab91-e9dc6297afd2] Running
	I0414 15:42:46.251476 1905530 system_pods.go:126] duration metric: took 199.99443ms to wait for k8s-apps to be running ...
	I0414 15:42:46.251491 1905530 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 15:42:46.251557 1905530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:42:46.272907 1905530 system_svc.go:56] duration metric: took 21.403314ms WaitForService to wait for kubelet
	I0414 15:42:46.272947 1905530 kubeadm.go:582] duration metric: took 35.175353213s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:42:46.272975 1905530 node_conditions.go:102] verifying NodePressure condition ...
	I0414 15:42:46.449997 1905530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 15:42:46.450040 1905530 node_conditions.go:123] node cpu capacity is 2
	I0414 15:42:46.450061 1905530 node_conditions.go:105] duration metric: took 177.079158ms to run NodePressure ...
	I0414 15:42:46.450077 1905530 start.go:241] waiting for startup goroutines ...
	I0414 15:42:46.450088 1905530 start.go:246] waiting for cluster config update ...
	I0414 15:42:46.450103 1905530 start.go:255] writing updated cluster config ...
	I0414 15:42:46.450597 1905530 ssh_runner.go:195] Run: rm -f paused
	I0414 15:42:46.505249 1905530 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 15:42:46.508181 1905530 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-036922" cluster and "default" namespace by default
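The final line of this cluster bring-up reports "kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)". As an illustration only (a hypothetical helper, not minikube source), the skew figure can be derived from the two version strings like this:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component from a "major.minor.patch" version string.
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	if len(parts) < 2 {
    		return 0
    	}
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	kubectl, cluster := "1.32.3", "1.32.2"
    	skew := minor(kubectl) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    }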
	I0414 15:42:43.225629 1907421 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.246285ms
	I0414 15:42:43.225795 1907421 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 15:42:49.223859 1907421 kubeadm.go:310] [api-check] The API server is healthy after 6.002939425s
	I0414 15:42:49.246703 1907421 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 15:42:49.269556 1907421 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 15:42:49.315606 1907421 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 15:42:49.315885 1907421 kubeadm.go:310] [mark-control-plane] Marking the node flannel-036922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 15:42:49.332520 1907421 kubeadm.go:310] [bootstrap-token] Using token: 6dsy98.vc3wpm9di98p1e2l
	I0414 15:42:47.675403 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:47.675860 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:47.675916 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:47.675831 1909012 retry.go:31] will retry after 3.188398794s: waiting for domain to come up
	I0414 15:42:49.335286 1907421 out.go:235]   - Configuring RBAC rules ...
	I0414 15:42:49.335480 1907421 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 15:42:49.342167 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 15:42:49.352554 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 15:42:49.361630 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 15:42:49.366627 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 15:42:49.372335 1907421 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 15:42:49.632892 1907421 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 15:42:50.092146 1907421 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 15:42:50.689823 1907421 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 15:42:50.691428 1907421 kubeadm.go:310] 
	I0414 15:42:50.691533 1907421 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 15:42:50.691545 1907421 kubeadm.go:310] 
	I0414 15:42:50.691654 1907421 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 15:42:50.691666 1907421 kubeadm.go:310] 
	I0414 15:42:50.691717 1907421 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 15:42:50.691812 1907421 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 15:42:50.691896 1907421 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 15:42:50.691906 1907421 kubeadm.go:310] 
	I0414 15:42:50.692009 1907421 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 15:42:50.692042 1907421 kubeadm.go:310] 
	I0414 15:42:50.692107 1907421 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 15:42:50.692120 1907421 kubeadm.go:310] 
	I0414 15:42:50.692187 1907421 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 15:42:50.692272 1907421 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 15:42:50.692368 1907421 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 15:42:50.692381 1907421 kubeadm.go:310] 
	I0414 15:42:50.692494 1907421 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 15:42:50.692586 1907421 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 15:42:50.692598 1907421 kubeadm.go:310] 
	I0414 15:42:50.692692 1907421 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6dsy98.vc3wpm9di98p1e2l \
	I0414 15:42:50.692847 1907421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f \
	I0414 15:42:50.692890 1907421 kubeadm.go:310] 	--control-plane 
	I0414 15:42:50.692903 1907421 kubeadm.go:310] 
	I0414 15:42:50.693022 1907421 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 15:42:50.693031 1907421 kubeadm.go:310] 
	I0414 15:42:50.693144 1907421 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6dsy98.vc3wpm9di98p1e2l \
	I0414 15:42:50.693291 1907421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f 
	I0414 15:42:50.693806 1907421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:42:50.694067 1907421 cni.go:84] Creating CNI manager for "flannel"
	I0414 15:42:50.696952 1907421 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0414 15:42:50.698346 1907421 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 15:42:50.706416 1907421 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 15:42:50.706438 1907421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 15:42:50.727656 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 15:42:51.287720 1907421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 15:42:51.287835 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:51.287871 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-036922 minikube.k8s.io/updated_at=2025_04_14T15_42_51_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=flannel-036922 minikube.k8s.io/primary=true
	I0414 15:42:51.430599 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:51.430598 1907421 ops.go:34] apiserver oom_adj: -16
	I0414 15:42:51.930825 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:52.430933 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:52.931267 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:53.431500 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:53.931720 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:54.431756 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:54.561136 1907421 kubeadm.go:1113] duration metric: took 3.273384012s to wait for elevateKubeSystemPrivileges
	I0414 15:42:54.561187 1907421 kubeadm.go:394] duration metric: took 14.616077815s to StartCluster
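The repeated "kubectl get sa default" runs above are the wait that elevateKubeSystemPrivileges performs before the RBAC binding can be created. A simplified sketch of that poll is shown below; the binary and kubeconfig paths are copied from the log and the timeout is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl"
    	kubeconfig := "/var/lib/minikube/kubeconfig"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// The default service account only appears once the controller manager
    		// has finished bootstrapping the namespace.
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig).Run()
    		if err == nil {
    			fmt.Println("default service account exists; safe to grant kube-system privileges")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for the default service account")
    }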
	I0414 15:42:54.561215 1907421 settings.go:142] acquiring lock: {Name:mkf8fdccd744793c9a876a07da6b33fabe880d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:54.561317 1907421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:42:54.562809 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/kubeconfig: {Name:mk700cb2cf46a87df11c1873f52c26c76c14915e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:54.563052 1907421 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:42:54.563065 1907421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 15:42:54.563117 1907421 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 15:42:54.563242 1907421 addons.go:69] Setting storage-provisioner=true in profile "flannel-036922"
	I0414 15:42:54.563265 1907421 addons.go:238] Setting addon storage-provisioner=true in "flannel-036922"
	I0414 15:42:54.563273 1907421 addons.go:69] Setting default-storageclass=true in profile "flannel-036922"
	I0414 15:42:54.563300 1907421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-036922"
	I0414 15:42:54.563305 1907421 host.go:66] Checking if "flannel-036922" exists ...
	I0414 15:42:54.563335 1907421 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:54.563788 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.563838 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.563865 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.563907 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.566159 1907421 out.go:177] * Verifying Kubernetes components...
	I0414 15:42:54.567701 1907421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:54.582661 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0414 15:42:54.583246 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.583768 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.583805 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.584263 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.584496 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:54.585593 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0414 15:42:54.586151 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.586695 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.586721 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.587156 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.587767 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.587823 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.588816 1907421 addons.go:238] Setting addon default-storageclass=true in "flannel-036922"
	I0414 15:42:54.588862 1907421 host.go:66] Checking if "flannel-036922" exists ...
	I0414 15:42:54.589169 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.589217 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.605944 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
	I0414 15:42:54.605986 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0414 15:42:54.606442 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.606714 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.607143 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.607160 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.607282 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.607308 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.607611 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.607729 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.607824 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:54.608193 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.608234 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.610044 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:54.612210 1907421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:42:50.867819 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:50.868522 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:50.868555 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:50.868467 1909012 retry.go:31] will retry after 3.520845781s: waiting for domain to come up
	I0414 15:42:54.391586 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.392265 1908903 main.go:141] libmachine: (bridge-036922) found domain IP: 192.168.61.165
	I0414 15:42:54.392301 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has current primary IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.392309 1908903 main.go:141] libmachine: (bridge-036922) reserving static IP address...
	I0414 15:42:54.392694 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find host DHCP lease matching {name: "bridge-036922", mac: "52:54:00:d8:e5:52", ip: "192.168.61.165"} in network mk-bridge-036922
	I0414 15:42:54.493139 1908903 main.go:141] libmachine: (bridge-036922) DBG | Getting to WaitForSSH function...
	I0414 15:42:54.493176 1908903 main.go:141] libmachine: (bridge-036922) reserved static IP address 192.168.61.165 for domain bridge-036922
	I0414 15:42:54.493184 1908903 main.go:141] libmachine: (bridge-036922) waiting for SSH...
	I0414 15:42:54.496732 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.497256 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.497289 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.497438 1908903 main.go:141] libmachine: (bridge-036922) DBG | Using SSH client type: external
	I0414 15:42:54.497470 1908903 main.go:141] libmachine: (bridge-036922) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa (-rw-------)
	I0414 15:42:54.497515 1908903 main.go:141] libmachine: (bridge-036922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:42:54.497529 1908903 main.go:141] libmachine: (bridge-036922) DBG | About to run SSH command:
	I0414 15:42:54.497542 1908903 main.go:141] libmachine: (bridge-036922) DBG | exit 0
	I0414 15:42:54.628504 1908903 main.go:141] libmachine: (bridge-036922) DBG | SSH cmd err, output: <nil>: 
	I0414 15:42:54.628809 1908903 main.go:141] libmachine: (bridge-036922) KVM machine creation complete
	I0414 15:42:54.629054 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetConfigRaw
	I0414 15:42:54.629681 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:54.630072 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:54.630332 1908903 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 15:42:54.630347 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetState
	I0414 15:42:54.632867 1908903 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 15:42:54.632882 1908903 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 15:42:54.632889 1908903 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 15:42:54.632896 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:54.637477 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.638308 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:54.638311 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.638423 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.638557 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.638771 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.638949 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:54.639184 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:54.639458 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:54.639474 1908903 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 15:42:54.750695 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:54.750726 1908903 main.go:141] libmachine: Detecting the provisioner...
	I0414 15:42:54.750740 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:54.754154 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.754756 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.754859 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.755083 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:54.755305 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.755456 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.755636 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:54.755854 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:54.756066 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:54.756078 1908903 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 15:42:54.871796 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 15:42:54.871901 1908903 main.go:141] libmachine: found compatible host: buildroot
	I0414 15:42:54.871917 1908903 main.go:141] libmachine: Provisioning with buildroot...
	I0414 15:42:54.871935 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:54.872246 1908903 buildroot.go:166] provisioning hostname "bridge-036922"
	I0414 15:42:54.872272 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:54.872483 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:54.875743 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.876125 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.876156 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.876386 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:54.876633 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.876832 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.876998 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:54.877181 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:54.877502 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:54.877523 1908903 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-036922 && echo "bridge-036922" | sudo tee /etc/hostname
	I0414 15:42:55.000057 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-036922
	
	I0414 15:42:55.000093 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.003879 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.004436 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.004467 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.004819 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.005054 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.005254 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.005507 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.005701 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:55.005995 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:55.006031 1908903 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-036922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-036922/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-036922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:42:55.128677 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:55.128716 1908903 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:42:55.128743 1908903 buildroot.go:174] setting up certificates
	I0414 15:42:55.128772 1908903 provision.go:84] configureAuth start
	I0414 15:42:55.128791 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:55.129195 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:55.132674 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.133237 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.133295 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.133459 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.137559 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.138052 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.138085 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.138322 1908903 provision.go:143] copyHostCerts
	I0414 15:42:55.138401 1908903 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:42:55.138427 1908903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:42:55.138499 1908903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:42:55.138639 1908903 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:42:55.138652 1908903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:42:55.138695 1908903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:42:55.138851 1908903 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:42:55.138863 1908903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:42:55.138888 1908903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
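The copyHostCerts sequence above (found existing file, remove it, copy the fresh one) can be pictured with the following sketch; the directory layout mirrors the log, but the helper itself is hypothetical and much simpler than exec_runner.go:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // copyCert replaces dst with the contents of src, mirroring the
    // "found ..., removing ..." then "cp: ..." pairs in the log.
    func copyCert(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		// An old copy exists; remove it before writing the fresh one.
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	data, err := os.ReadFile(src)
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(dst, data, 0o600)
    }

    func main() {
    	base := "/home/jenkins/minikube-integration/20512-1845971/.minikube"
    	for _, name := range []string{"key.pem", "ca.pem", "cert.pem"} {
    		src := filepath.Join(base, "certs", name)
    		dst := filepath.Join(base, name)
    		if err := copyCert(src, dst); err != nil {
    			fmt.Println("copy failed:", err)
    			continue
    		}
    		fmt.Println("copied", src, "-->", dst)
    	}
    }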
	I0414 15:42:55.139002 1908903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.bridge-036922 san=[127.0.0.1 192.168.61.165 bridge-036922 localhost minikube]
	I0414 15:42:55.169326 1908903 provision.go:177] copyRemoteCerts
	I0414 15:42:55.169402 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:42:55.169429 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.172809 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.173239 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.173270 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.173706 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.174030 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.174255 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.174485 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.261123 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:42:55.288685 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 15:42:55.316648 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:42:55.346718 1908903 provision.go:87] duration metric: took 217.897994ms to configureAuth
	I0414 15:42:55.346759 1908903 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:42:55.347050 1908903 config.go:182] Loaded profile config "bridge-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:55.347158 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.350409 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.350855 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.350888 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.351139 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.351328 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.351559 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.351722 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.351895 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:55.352172 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:55.352196 1908903 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:42:54.613578 1907421 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:42:54.613601 1907421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 15:42:54.613625 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:54.617705 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.618134 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:54.618154 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.618488 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:54.618717 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.618939 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:54.619103 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:54.627890 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
	I0414 15:42:54.628364 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.628827 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.628849 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.629832 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.630200 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:54.632595 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:54.633055 1907421 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 15:42:54.633074 1907421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 15:42:54.633096 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:54.637402 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.637882 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:54.637912 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.638627 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:54.638825 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.638994 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:54.639153 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:54.824401 1907421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:42:54.824485 1907421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 15:42:54.848222 1907421 node_ready.go:35] waiting up to 15m0s for node "flannel-036922" to be "Ready" ...
	I0414 15:42:55.016349 1907421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 15:42:55.024812 1907421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:42:55.334347 1907421 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0414 15:42:55.469300 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.469338 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.469832 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.469875 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.469885 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.469894 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.469915 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.470211 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.470226 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.470243 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.494538 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.494593 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.494941 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.494960 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.494989 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.843405 1907421 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-036922" context rescaled to 1 replicas
	I0414 15:42:55.852113 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.852145 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.852433 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.852455 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.852467 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.852475 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.852855 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.852876 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.852900 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.855070 1907421 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 15:42:55.609672 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:42:55.609708 1908903 main.go:141] libmachine: Checking connection to Docker...
	I0414 15:42:55.609720 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetURL
	I0414 15:42:55.611018 1908903 main.go:141] libmachine: (bridge-036922) DBG | using libvirt version 6000000
	I0414 15:42:55.613407 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.613780 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.613807 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.614012 1908903 main.go:141] libmachine: Docker is up and running!
	I0414 15:42:55.614034 1908903 main.go:141] libmachine: Reticulating splines...
	I0414 15:42:55.614045 1908903 client.go:171] duration metric: took 22.962392414s to LocalClient.Create
	I0414 15:42:55.614118 1908903 start.go:167] duration metric: took 22.96254203s to libmachine.API.Create "bridge-036922"
	I0414 15:42:55.614140 1908903 start.go:293] postStartSetup for "bridge-036922" (driver="kvm2")
	I0414 15:42:55.614154 1908903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:42:55.614196 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.614557 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:42:55.614591 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.617351 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.617730 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.617783 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.617881 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.618095 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.618279 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.618457 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.706758 1908903 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:42:55.711737 1908903 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:42:55.711775 1908903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:42:55.711864 1908903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:42:55.711967 1908903 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:42:55.712104 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:42:55.724874 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:42:55.754120 1908903 start.go:296] duration metric: took 139.933679ms for postStartSetup
	I0414 15:42:55.754193 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetConfigRaw
	I0414 15:42:55.754932 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:55.757984 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.758267 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.758297 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.758631 1908903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/config.json ...
	I0414 15:42:55.758849 1908903 start.go:128] duration metric: took 23.13086309s to createHost
	I0414 15:42:55.758880 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.761734 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.762225 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.762256 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.762495 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.762688 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.762944 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.763100 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.763340 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:55.763660 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:55.763680 1908903 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:42:55.871836 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744645375.840729325
	
	I0414 15:42:55.871865 1908903 fix.go:216] guest clock: 1744645375.840729325
	I0414 15:42:55.871875 1908903 fix.go:229] Guest: 2025-04-14 15:42:55.840729325 +0000 UTC Remote: 2025-04-14 15:42:55.758864102 +0000 UTC m=+35.409485075 (delta=81.865223ms)
	I0414 15:42:55.871904 1908903 fix.go:200] guest clock delta is within tolerance: 81.865223ms
	I0414 15:42:55.871910 1908903 start.go:83] releasing machines lock for "bridge-036922", held for 23.244108969s
	I0414 15:42:55.871935 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.872246 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:55.875616 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.876069 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.876099 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.876330 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.876969 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.877174 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.877292 1908903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:42:55.877339 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.877479 1908903 ssh_runner.go:195] Run: cat /version.json
	I0414 15:42:55.877515 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.880495 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.880821 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.880916 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.880943 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.881164 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.881301 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.881322 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.881353 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.881480 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.881545 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.881643 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.881712 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.881911 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.882048 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.986199 1908903 ssh_runner.go:195] Run: systemctl --version
	I0414 15:42:55.993392 1908903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:42:56.164978 1908903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:42:56.172178 1908903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:42:56.172282 1908903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:42:56.197933 1908903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:42:56.197965 1908903 start.go:495] detecting cgroup driver to use...
	I0414 15:42:56.198045 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:42:56.220424 1908903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:42:56.238850 1908903 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:42:56.238925 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:42:56.258562 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:42:56.281276 1908903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:42:56.446192 1908903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:42:56.624912 1908903 docker.go:233] disabling docker service ...
	I0414 15:42:56.624983 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:42:56.646632 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:42:56.661759 1908903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:42:56.821178 1908903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:42:56.960834 1908903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:42:56.976444 1908903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:42:57.000020 1908903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 15:42:57.000107 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.012798 1908903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:42:57.012878 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.024940 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.037307 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.049273 1908903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:42:57.061679 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.073870 1908903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.092514 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.104956 1908903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:42:57.115727 1908903 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:42:57.115813 1908903 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:42:57.133078 1908903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 15:42:57.144441 1908903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:57.281237 1908903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 15:42:57.385608 1908903 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:42:57.385708 1908903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:42:57.391600 1908903 start.go:563] Will wait 60s for crictl version
	I0414 15:42:57.391684 1908903 ssh_runner.go:195] Run: which crictl
	I0414 15:42:57.396066 1908903 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:42:57.436559 1908903 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:42:57.436662 1908903 ssh_runner.go:195] Run: crio --version
	I0414 15:42:57.466242 1908903 ssh_runner.go:195] Run: crio --version
	I0414 15:42:57.506266 1908903 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
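	# (illustrative sketch, not captured log output) Net effect of the sed edits at 15:42:56-57 on
	# /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands above; the surrounding TOML
	# sections and any untouched keys are assumptions:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]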
	I0414 15:42:55.856560 1907421 addons.go:514] duration metric: took 1.293454428s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 15:42:56.852426 1907421 node_ready.go:53] node "flannel-036922" has status "Ready":"False"
	I0414 15:42:59.215921 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:42:59.216197 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:42:59.216228 1898413 kubeadm.go:310] 
	I0414 15:42:59.216283 1898413 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 15:42:59.216336 1898413 kubeadm.go:310] 		timed out waiting for the condition
	I0414 15:42:59.216342 1898413 kubeadm.go:310] 
	I0414 15:42:59.216389 1898413 kubeadm.go:310] 	This error is likely caused by:
	I0414 15:42:59.216433 1898413 kubeadm.go:310] 		- The kubelet is not running
	I0414 15:42:59.216581 1898413 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 15:42:59.216592 1898413 kubeadm.go:310] 
	I0414 15:42:59.216725 1898413 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 15:42:59.216770 1898413 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 15:42:59.216818 1898413 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 15:42:59.216822 1898413 kubeadm.go:310] 
	I0414 15:42:59.216907 1898413 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 15:42:59.217006 1898413 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 15:42:59.217015 1898413 kubeadm.go:310] 
	I0414 15:42:59.217187 1898413 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 15:42:59.217303 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 15:42:59.217409 1898413 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 15:42:59.217503 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 15:42:59.217511 1898413 kubeadm.go:310] 
	I0414 15:42:59.219259 1898413 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:42:59.219407 1898413 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 15:42:59.219514 1898413 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 15:42:59.220159 1898413 kubeadm.go:394] duration metric: took 8m0.569569368s to StartCluster
	I0414 15:42:59.220230 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:42:59.220304 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:42:59.296348 1898413 cri.go:89] found id: ""
	I0414 15:42:59.296381 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.296393 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:42:59.296403 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:42:59.296511 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:42:59.357668 1898413 cri.go:89] found id: ""
	I0414 15:42:59.357701 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.357713 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:42:59.357720 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:42:59.357797 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:42:59.408582 1898413 cri.go:89] found id: ""
	I0414 15:42:59.408613 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.408621 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:42:59.408627 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:42:59.408702 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:42:59.457402 1898413 cri.go:89] found id: ""
	I0414 15:42:59.457438 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.457449 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:42:59.457457 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:42:59.457530 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:42:59.508543 1898413 cri.go:89] found id: ""
	I0414 15:42:59.508601 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.508613 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:42:59.508621 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:42:59.508691 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:42:59.557213 1898413 cri.go:89] found id: ""
	I0414 15:42:59.557250 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.557262 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:42:59.557270 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:42:59.557343 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:42:59.607994 1898413 cri.go:89] found id: ""
	I0414 15:42:59.608023 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.608048 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:42:59.608057 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:42:59.608129 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:42:59.657459 1898413 cri.go:89] found id: ""
	I0414 15:42:59.657494 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.657507 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:42:59.657525 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:42:59.657549 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:42:59.723160 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:42:59.723223 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:42:59.743367 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:42:59.743418 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:42:59.876644 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:42:59.876695 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:42:59.876713 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:43:00.032948 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:43:00.032994 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 15:43:00.086613 1898413 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 15:43:00.086686 1898413 out.go:270] * 
	W0414 15:43:00.086782 1898413 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:43:00.086809 1898413 out.go:270] * 
	W0414 15:43:00.087917 1898413 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 15:43:00.091413 1898413 out.go:201] 
	W0414 15:43:00.092767 1898413 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:43:00.092825 1898413 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 15:43:00.092861 1898413 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 15:43:00.094446 1898413 out.go:201] 
	I0414 15:42:57.507650 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:57.510669 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:57.511148 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:57.511176 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:57.511409 1908903 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 15:42:57.516092 1908903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:42:57.529590 1908903 kubeadm.go:883] updating cluster {Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.165 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:42:57.529766 1908903 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:42:57.529845 1908903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:42:57.572139 1908903 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 15:42:57.572227 1908903 ssh_runner.go:195] Run: which lz4
	I0414 15:42:57.576627 1908903 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:42:57.581291 1908903 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:42:57.581343 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 15:42:59.289654 1908903 crio.go:462] duration metric: took 1.713065895s to copy over tarball
	I0414 15:42:59.289872 1908903 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 15:42:59.351809 1907421 node_ready.go:53] node "flannel-036922" has status "Ready":"False"
	I0414 15:43:00.852154 1907421 node_ready.go:49] node "flannel-036922" has status "Ready":"True"
	I0414 15:43:00.852190 1907421 node_ready.go:38] duration metric: took 6.003920766s for node "flannel-036922" to be "Ready" ...
	I0414 15:43:00.852202 1907421 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:43:00.855688 1907421 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:03.053356 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:02.349077 1908903 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.059144774s)
	I0414 15:43:02.349125 1908903 crio.go:469] duration metric: took 3.05935727s to extract the tarball
	I0414 15:43:02.349133 1908903 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 15:43:02.391460 1908903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:43:02.441459 1908903 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 15:43:02.441495 1908903 cache_images.go:84] Images are preloaded, skipping loading
	I0414 15:43:02.441507 1908903 kubeadm.go:934] updating node { 192.168.61.165 8443 v1.32.2 crio true true} ...
	I0414 15:43:02.441660 1908903 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-036922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0414 15:43:02.441763 1908903 ssh_runner.go:195] Run: crio config
	I0414 15:43:02.502883 1908903 cni.go:84] Creating CNI manager for "bridge"
	I0414 15:43:02.502919 1908903 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 15:43:02.502962 1908903 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.165 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-036922 NodeName:bridge-036922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 15:43:02.503126 1908903 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-036922"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.165"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.165"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
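The block above is the full kubeadm/kubelet/kube-proxy configuration that minikube writes to /var/tmp/minikube/kubeadm.yaml.new and later copies to /var/tmp/minikube/kubeadm.yaml before running kubeadm init. A minimal sketch of how such a rendered file could be sanity-checked on the node without touching cluster state, assuming a kubeadm release recent enough to ship the "config validate" subcommand (the binary path matches the one used in these logs):

	# Validate the rendered configuration against the kubeadm API types.
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml

	# Or exercise the init code path end to end without persisting anything.
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run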
	
	I0414 15:43:02.503207 1908903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 15:43:02.516388 1908903 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 15:43:02.516457 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 15:43:02.527106 1908903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0414 15:43:02.545740 1908903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 15:43:02.564255 1908903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0414 15:43:02.582628 1908903 ssh_runner.go:195] Run: grep 192.168.61.165	control-plane.minikube.internal$ /etc/hosts
	I0414 15:43:02.587032 1908903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:43:02.601453 1908903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:43:02.733859 1908903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:43:02.752631 1908903 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922 for IP: 192.168.61.165
	I0414 15:43:02.752661 1908903 certs.go:194] generating shared ca certs ...
	I0414 15:43:02.752689 1908903 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:02.752885 1908903 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 15:43:02.752950 1908903 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 15:43:02.752967 1908903 certs.go:256] generating profile certs ...
	I0414 15:43:02.753043 1908903 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.key
	I0414 15:43:02.753060 1908903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt with IP's: []
	I0414 15:43:03.058289 1908903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt ...
	I0414 15:43:03.058339 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: {Name:mk7351040ba2e8c3a4ca5b96eb26d95a2d5977ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.058574 1908903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.key ...
	I0414 15:43:03.058591 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.key: {Name:mkd34c01b2eee2dc3fc1717df5b3dc46ce680363 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.058702 1908903 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key.df689893
	I0414 15:43:03.058718 1908903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt.df689893 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.165]
	I0414 15:43:03.689440 1908903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt.df689893 ...
	I0414 15:43:03.689480 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt.df689893: {Name:mkd5b14756191631834da95f41b38a940cf31349 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.689692 1908903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key.df689893 ...
	I0414 15:43:03.689717 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key.df689893: {Name:mkf6fff86e315dd01269aced9364162e3eff934a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.689822 1908903 certs.go:381] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt.df689893 -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt
	I0414 15:43:03.689918 1908903 certs.go:385] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key.df689893 -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key
	I0414 15:43:03.689995 1908903 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.key
	I0414 15:43:03.690014 1908903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.crt with IP's: []
	I0414 15:43:03.794322 1908903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.crt ...
	I0414 15:43:03.794351 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.crt: {Name:mk8f147274fd78d695cbf09159a830835e63cf56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.794521 1908903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.key ...
	I0414 15:43:03.794536 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.key: {Name:mk730e2e16f2bcbe4155bbe3689536f15e6442c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.794712 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem (1338 bytes)
	W0414 15:43:03.794750 1908903 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270_empty.pem, impossibly tiny 0 bytes
	I0414 15:43:03.794776 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 15:43:03.794812 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 15:43:03.794838 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 15:43:03.794859 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 15:43:03.794898 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:43:03.795467 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 15:43:03.825320 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:43:03.854532 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:43:03.887606 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:43:03.921213 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 15:43:03.950111 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 15:43:03.981773 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:43:04.011109 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 15:43:04.039409 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:43:04.064525 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:43:04.092407 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:43:04.118044 1908903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:43:04.139488 1908903 ssh_runner.go:195] Run: openssl version
	I0414 15:43:04.146985 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:43:04.159472 1908903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:43:04.164659 1908903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:43:04.164739 1908903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:43:04.171807 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 15:43:04.188160 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:43:04.205038 1908903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:43:04.211611 1908903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:43:04.211681 1908903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:43:04.218747 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:43:04.236820 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:43:04.263022 1908903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:43:04.273910 1908903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:43:04.273985 1908903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:43:04.287756 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
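The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the standard OpenSSL CA directory convention: each link name is the certificate's subject hash with a .0 suffix, which is exactly what "openssl x509 -hash -noout" prints. A small sketch of the same step done by hand for the minikube CA, using the paths from these logs (the hash variable is purely illustrative):

	# Compute the subject hash that OpenSSL uses to look up CAs in /etc/ssl/certs.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)

	# Link the certificate under its hash so TLS verification can find it.
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"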
	I0414 15:43:04.306774 1908903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:43:04.311741 1908903 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 15:43:04.311807 1908903 kubeadm.go:392] StartCluster: {Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.165 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:43:04.311903 1908903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:43:04.311973 1908903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:43:04.357659 1908903 cri.go:89] found id: ""
	I0414 15:43:04.357758 1908903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 15:43:04.372556 1908903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:43:04.384120 1908903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:43:04.396721 1908903 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:43:04.396749 1908903 kubeadm.go:157] found existing configuration files:
	
	I0414 15:43:04.396811 1908903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:43:04.407452 1908903 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:43:04.407542 1908903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:43:04.418929 1908903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:43:04.430627 1908903 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:43:04.430717 1908903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:43:04.442139 1908903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:43:04.453146 1908903 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:43:04.453222 1908903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:43:04.464848 1908903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:43:04.479067 1908903 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:43:04.479145 1908903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:43:04.491976 1908903 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:43:04.553900 1908903 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 15:43:04.554017 1908903 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:43:04.672215 1908903 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:43:04.672355 1908903 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:43:04.672497 1908903 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 15:43:04.688408 1908903 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:43:04.769682 1908903 out.go:235]   - Generating certificates and keys ...
	I0414 15:43:04.769796 1908903 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:43:04.769875 1908903 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:43:04.820739 1908903 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 15:43:05.150004 1908903 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 15:43:05.206561 1908903 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 15:43:05.428733 1908903 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 15:43:05.776550 1908903 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 15:43:05.776721 1908903 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-036922 localhost] and IPs [192.168.61.165 127.0.0.1 ::1]
	I0414 15:43:06.204857 1908903 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 15:43:06.205015 1908903 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-036922 localhost] and IPs [192.168.61.165 127.0.0.1 ::1]
	I0414 15:43:06.375999 1908903 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 15:43:06.499159 1908903 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 15:43:06.580941 1908903 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 15:43:06.581186 1908903 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:43:06.679071 1908903 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:43:06.835883 1908903 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 15:43:06.969239 1908903 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:43:07.047193 1908903 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:43:07.515283 1908903 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:43:07.517979 1908903 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:43:07.520948 1908903 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:43:05.362702 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:07.363565 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:07.522722 1908903 out.go:235]   - Booting up control plane ...
	I0414 15:43:07.522853 1908903 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:43:07.522975 1908903 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:43:07.523079 1908903 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:43:07.540604 1908903 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:43:07.548217 1908903 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:43:07.548314 1908903 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:43:07.720338 1908903 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 15:43:07.720487 1908903 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 15:43:08.221640 1908903 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.690509ms
	I0414 15:43:08.221744 1908903 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 15:43:09.363642 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:11.862669 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:13.722997 1908903 kubeadm.go:310] [api-check] The API server is healthy after 5.502696369s
	I0414 15:43:13.742223 1908903 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 15:43:13.757863 1908903 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 15:43:13.795334 1908903 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 15:43:13.795643 1908903 kubeadm.go:310] [mark-control-plane] Marking the node bridge-036922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 15:43:13.812020 1908903 kubeadm.go:310] [bootstrap-token] Using token: c2gb67.laeaummb5gd4egy3
	I0414 15:43:13.813488 1908903 out.go:235]   - Configuring RBAC rules ...
	I0414 15:43:13.813629 1908903 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 15:43:13.828250 1908903 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 15:43:13.852437 1908903 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 15:43:13.858931 1908903 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 15:43:13.863903 1908903 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 15:43:13.871061 1908903 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 15:43:14.132072 1908903 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 15:43:14.585461 1908903 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 15:43:15.130437 1908903 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 15:43:15.131975 1908903 kubeadm.go:310] 
	I0414 15:43:15.132090 1908903 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 15:43:15.132109 1908903 kubeadm.go:310] 
	I0414 15:43:15.132246 1908903 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 15:43:15.132265 1908903 kubeadm.go:310] 
	I0414 15:43:15.132300 1908903 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 15:43:15.132378 1908903 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 15:43:15.132458 1908903 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 15:43:15.132468 1908903 kubeadm.go:310] 
	I0414 15:43:15.132540 1908903 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 15:43:15.132550 1908903 kubeadm.go:310] 
	I0414 15:43:15.132636 1908903 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 15:43:15.132650 1908903 kubeadm.go:310] 
	I0414 15:43:15.132726 1908903 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 15:43:15.132825 1908903 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 15:43:15.132918 1908903 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 15:43:15.132927 1908903 kubeadm.go:310] 
	I0414 15:43:15.133043 1908903 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 15:43:15.133158 1908903 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 15:43:15.133178 1908903 kubeadm.go:310] 
	I0414 15:43:15.133292 1908903 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c2gb67.laeaummb5gd4egy3 \
	I0414 15:43:15.133428 1908903 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f \
	I0414 15:43:15.133451 1908903 kubeadm.go:310] 	--control-plane 
	I0414 15:43:15.133455 1908903 kubeadm.go:310] 
	I0414 15:43:15.133587 1908903 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 15:43:15.133599 1908903 kubeadm.go:310] 
	I0414 15:43:15.133694 1908903 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c2gb67.laeaummb5gd4egy3 \
	I0414 15:43:15.133862 1908903 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f 
	I0414 15:43:15.134716 1908903 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
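The --discovery-token-ca-cert-hash value printed in the join command above is the SHA-256 of the cluster CA's public key, so it can be recomputed later from ca.crt if the join command ever needs to be reconstructed. A sketch using the openssl pipeline documented for kubeadm join, with the CA path these logs use (certificatesDir is /var/lib/minikube/certs):

	# Recompute the discovery token CA cert hash from the cluster CA certificate.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex \
	  | sed 's/^.* //'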
	I0414 15:43:15.134833 1908903 cni.go:84] Creating CNI manager for "bridge"
	I0414 15:43:15.137761 1908903 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 15:43:15.139072 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 15:43:15.150727 1908903 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 15:43:15.172753 1908903 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 15:43:15.172843 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:15.172878 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-036922 minikube.k8s.io/updated_at=2025_04_14T15_43_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=bridge-036922 minikube.k8s.io/primary=true
	I0414 15:43:15.337474 1908903 ops.go:34] apiserver oom_adj: -16
	I0414 15:43:15.337603 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:13.862759 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:15.864154 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:17.367663 1907421 pod_ready.go:93] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.367697 1907421 pod_ready.go:82] duration metric: took 16.511977863s for pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.367714 1907421 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.379462 1907421 pod_ready.go:93] pod "etcd-flannel-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.379493 1907421 pod_ready.go:82] duration metric: took 11.770579ms for pod "etcd-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.379508 1907421 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.386865 1907421 pod_ready.go:93] pod "kube-apiserver-flannel-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.386898 1907421 pod_ready.go:82] duration metric: took 7.382173ms for pod "kube-apiserver-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.386913 1907421 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.392211 1907421 pod_ready.go:93] pod "kube-controller-manager-flannel-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.392234 1907421 pod_ready.go:82] duration metric: took 5.31374ms for pod "kube-controller-manager-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.392243 1907421 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-7zd42" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.397287 1907421 pod_ready.go:93] pod "kube-proxy-7zd42" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.397312 1907421 pod_ready.go:82] duration metric: took 5.062669ms for pod "kube-proxy-7zd42" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.397322 1907421 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.759889 1907421 pod_ready.go:93] pod "kube-scheduler-flannel-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.759918 1907421 pod_ready.go:82] duration metric: took 362.587262ms for pod "kube-scheduler-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.759930 1907421 pod_ready.go:39] duration metric: took 16.907709508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:43:17.759949 1907421 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:43:17.760002 1907421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:43:17.779080 1907421 api_server.go:72] duration metric: took 23.215997595s to wait for apiserver process to appear ...
	I0414 15:43:17.779114 1907421 api_server.go:88] waiting for apiserver healthz status ...
	I0414 15:43:17.779133 1907421 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0414 15:43:17.786736 1907421 api_server.go:279] https://192.168.72.200:8443/healthz returned 200:
	ok
	I0414 15:43:17.787921 1907421 api_server.go:141] control plane version: v1.32.2
	I0414 15:43:17.787946 1907421 api_server.go:131] duration metric: took 8.826568ms to wait for apiserver health ...
	I0414 15:43:17.787956 1907421 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 15:43:17.961903 1907421 system_pods.go:59] 7 kube-system pods found
	I0414 15:43:17.961950 1907421 system_pods.go:61] "coredns-668d6bf9bc-8lknp" [06667bb5-e553-4c4f-abf5-d8c01729ea1d] Running
	I0414 15:43:17.961956 1907421 system_pods.go:61] "etcd-flannel-036922" [c2a29905-84cb-4e69-8a15-0525ae990e24] Running
	I0414 15:43:17.961959 1907421 system_pods.go:61] "kube-apiserver-flannel-036922" [d9336840-d608-4c31-bf23-e479553bf106] Running
	I0414 15:43:17.961964 1907421 system_pods.go:61] "kube-controller-manager-flannel-036922" [31280388-ed00-4b11-bc68-0cafdecc33e6] Running
	I0414 15:43:17.961971 1907421 system_pods.go:61] "kube-proxy-7zd42" [671465e4-9ea3-4a36-8cc1-5a7c303837b2] Running
	I0414 15:43:17.961975 1907421 system_pods.go:61] "kube-scheduler-flannel-036922" [cc061af0-8dee-4822-8f03-17ab374c2c08] Running
	I0414 15:43:17.961979 1907421 system_pods.go:61] "storage-provisioner" [d5b335f4-e0d4-48bb-9aa8-9ee2a9619b48] Running
	I0414 15:43:17.961987 1907421 system_pods.go:74] duration metric: took 174.024277ms to wait for pod list to return data ...
	I0414 15:43:17.962002 1907421 default_sa.go:34] waiting for default service account to be created ...
	I0414 15:43:15.837661 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:16.338010 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:16.838499 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:17.338708 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:17.838651 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:18.338085 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:18.435813 1908903 kubeadm.go:1113] duration metric: took 3.263041868s to wait for elevateKubeSystemPrivileges
	I0414 15:43:18.435864 1908903 kubeadm.go:394] duration metric: took 14.12406212s to StartCluster
	I0414 15:43:18.435891 1908903 settings.go:142] acquiring lock: {Name:mkf8fdccd744793c9a876a07da6b33fabe880d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:18.435976 1908903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:43:18.437104 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/kubeconfig: {Name:mk700cb2cf46a87df11c1873f52c26c76c14915e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:18.437365 1908903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 15:43:18.437363 1908903 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.165 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:43:18.437461 1908903 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 15:43:18.437538 1908903 addons.go:69] Setting storage-provisioner=true in profile "bridge-036922"
	I0414 15:43:18.437559 1908903 addons.go:238] Setting addon storage-provisioner=true in "bridge-036922"
	I0414 15:43:18.437605 1908903 host.go:66] Checking if "bridge-036922" exists ...
	I0414 15:43:18.437618 1908903 config.go:182] Loaded profile config "bridge-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:43:18.437553 1908903 addons.go:69] Setting default-storageclass=true in profile "bridge-036922"
	I0414 15:43:18.437690 1908903 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-036922"
	I0414 15:43:18.438070 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.438102 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.438141 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.438106 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.439155 1908903 out.go:177] * Verifying Kubernetes components...
	I0414 15:43:18.440542 1908903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:43:18.455641 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0414 15:43:18.456194 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.456697 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.456719 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.457142 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.457599 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.457622 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.460636 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0414 15:43:18.461223 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.461758 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.461782 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.462163 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.462386 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetState
	I0414 15:43:18.466166 1908903 addons.go:238] Setting addon default-storageclass=true in "bridge-036922"
	I0414 15:43:18.466211 1908903 host.go:66] Checking if "bridge-036922" exists ...
	I0414 15:43:18.466628 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.466677 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.475648 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46661
	I0414 15:43:18.476519 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.477273 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.477298 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.477770 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.477988 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetState
	I0414 15:43:18.480218 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:43:18.482224 1908903 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:43:18.162281 1907421 default_sa.go:45] found service account: "default"
	I0414 15:43:18.162314 1907421 default_sa.go:55] duration metric: took 200.300594ms for default service account to be created ...
	I0414 15:43:18.162327 1907421 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 15:43:18.361676 1907421 system_pods.go:86] 7 kube-system pods found
	I0414 15:43:18.361730 1907421 system_pods.go:89] "coredns-668d6bf9bc-8lknp" [06667bb5-e553-4c4f-abf5-d8c01729ea1d] Running
	I0414 15:43:18.361739 1907421 system_pods.go:89] "etcd-flannel-036922" [c2a29905-84cb-4e69-8a15-0525ae990e24] Running
	I0414 15:43:18.361745 1907421 system_pods.go:89] "kube-apiserver-flannel-036922" [d9336840-d608-4c31-bf23-e479553bf106] Running
	I0414 15:43:18.361757 1907421 system_pods.go:89] "kube-controller-manager-flannel-036922" [31280388-ed00-4b11-bc68-0cafdecc33e6] Running
	I0414 15:43:18.361762 1907421 system_pods.go:89] "kube-proxy-7zd42" [671465e4-9ea3-4a36-8cc1-5a7c303837b2] Running
	I0414 15:43:18.361767 1907421 system_pods.go:89] "kube-scheduler-flannel-036922" [cc061af0-8dee-4822-8f03-17ab374c2c08] Running
	I0414 15:43:18.361780 1907421 system_pods.go:89] "storage-provisioner" [d5b335f4-e0d4-48bb-9aa8-9ee2a9619b48] Running
	I0414 15:43:18.361790 1907421 system_pods.go:126] duration metric: took 199.454049ms to wait for k8s-apps to be running ...
	I0414 15:43:18.361798 1907421 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 15:43:18.361862 1907421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:43:18.378922 1907421 system_svc.go:56] duration metric: took 17.110809ms WaitForService to wait for kubelet
	I0414 15:43:18.378962 1907421 kubeadm.go:582] duration metric: took 23.815883488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:43:18.378990 1907421 node_conditions.go:102] verifying NodePressure condition ...
	I0414 15:43:18.561117 1907421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 15:43:18.561152 1907421 node_conditions.go:123] node cpu capacity is 2
	I0414 15:43:18.561172 1907421 node_conditions.go:105] duration metric: took 182.174643ms to run NodePressure ...
	I0414 15:43:18.561187 1907421 start.go:241] waiting for startup goroutines ...
	I0414 15:43:18.561195 1907421 start.go:246] waiting for cluster config update ...
	I0414 15:43:18.561210 1907421 start.go:255] writing updated cluster config ...
	I0414 15:43:18.561585 1907421 ssh_runner.go:195] Run: rm -f paused
	I0414 15:43:18.616899 1907421 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 15:43:18.619008 1907421 out.go:177] * Done! kubectl is now configured to use "flannel-036922" cluster and "default" namespace by default
	I0414 15:43:18.483663 1908903 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:43:18.483687 1908903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 15:43:18.483716 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:43:18.487404 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:43:18.487941 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:43:18.487975 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:43:18.488288 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:43:18.488518 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:43:18.488630 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37421
	I0414 15:43:18.488862 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:43:18.489039 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:43:18.489197 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.489670 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.489697 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.490323 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.490968 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.491008 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.507682 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0414 15:43:18.508081 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.508551 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.508583 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.509071 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.509269 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetState
	I0414 15:43:18.511125 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:43:18.511478 1908903 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 15:43:18.511517 1908903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 15:43:18.511540 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:43:18.514530 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:43:18.515112 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:43:18.515146 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:43:18.515270 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:43:18.515457 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:43:18.515631 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:43:18.515798 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:43:18.606562 1908903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 15:43:18.657948 1908903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:43:18.790591 1908903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 15:43:18.828450 1908903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:43:19.267941 1908903 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
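The sed pipeline run a few lines earlier rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway address (192.168.61.1 here). A quick, illustrative way to confirm the injected hosts stanza on a cluster like this, assuming kubectl access with the same kubeconfig:

	# Print the live Corefile and look for the injected hosts block.
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'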
	I0414 15:43:19.268837 1908903 node_ready.go:35] waiting up to 15m0s for node "bridge-036922" to be "Ready" ...
	I0414 15:43:19.321139 1908903 node_ready.go:49] node "bridge-036922" has status "Ready":"True"
	I0414 15:43:19.321167 1908903 node_ready.go:38] duration metric: took 52.287821ms for node "bridge-036922" to be "Ready" ...
	I0414 15:43:19.321178 1908903 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:43:19.337686 1908903 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:19.349044 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.349081 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.349385 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.349403 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.349415 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.349423 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.349687 1908903 main.go:141] libmachine: (bridge-036922) DBG | Closing plugin on server side
	I0414 15:43:19.349706 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.349721 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.430028 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.430059 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.430405 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.430426 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.738624 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.738657 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.738999 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.739023 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.739028 1908903 main.go:141] libmachine: (bridge-036922) DBG | Closing plugin on server side
	I0414 15:43:19.739033 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.739051 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.740924 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.740945 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.743503 1908903 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 15:43:19.744467 1908903 addons.go:514] duration metric: took 1.307001565s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 15:43:19.772839 1908903 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-036922" context rescaled to 1 replicas
	I0414 15:43:21.344207 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:23.843345 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:25.844108 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:28.344124 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:30.352697 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:32.843340 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:34.845218 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:36.845518 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:39.345615 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:41.844911 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:44.344880 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:46.844497 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:49.345176 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:51.843480 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:53.843574 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:55.845252 1908903 pod_ready.go:93] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:55.845283 1908903 pod_ready.go:82] duration metric: took 36.507553933s for pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.845297 1908903 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-sf5z2" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.847823 1908903 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-sf5z2" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-sf5z2" not found
	I0414 15:43:55.847853 1908903 pod_ready.go:82] duration metric: took 2.54674ms for pod "coredns-668d6bf9bc-sf5z2" in "kube-system" namespace to be "Ready" ...
	E0414 15:43:55.847867 1908903 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-sf5z2" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-sf5z2" not found
	I0414 15:43:55.847875 1908903 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.852708 1908903 pod_ready.go:93] pod "etcd-bridge-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:55.852735 1908903 pod_ready.go:82] duration metric: took 4.851802ms for pod "etcd-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.852747 1908903 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.857917 1908903 pod_ready.go:93] pod "kube-apiserver-bridge-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:55.857941 1908903 pod_ready.go:82] duration metric: took 5.186792ms for pod "kube-apiserver-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.857954 1908903 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.862028 1908903 pod_ready.go:93] pod "kube-controller-manager-bridge-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:55.862052 1908903 pod_ready.go:82] duration metric: took 4.089611ms for pod "kube-controller-manager-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.862066 1908903 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-m4qjw" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:56.042402 1908903 pod_ready.go:93] pod "kube-proxy-m4qjw" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:56.042429 1908903 pod_ready.go:82] duration metric: took 180.35577ms for pod "kube-proxy-m4qjw" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:56.042439 1908903 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:56.445374 1908903 pod_ready.go:93] pod "kube-scheduler-bridge-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:56.445414 1908903 pod_ready.go:82] duration metric: took 402.96709ms for pod "kube-scheduler-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:56.445428 1908903 pod_ready.go:39] duration metric: took 37.124235593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:43:56.445456 1908903 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:43:56.445532 1908903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:43:56.460913 1908903 api_server.go:72] duration metric: took 38.023515623s to wait for apiserver process to appear ...
	I0414 15:43:56.460945 1908903 api_server.go:88] waiting for apiserver healthz status ...
	I0414 15:43:56.460966 1908903 api_server.go:253] Checking apiserver healthz at https://192.168.61.165:8443/healthz ...
	I0414 15:43:56.465387 1908903 api_server.go:279] https://192.168.61.165:8443/healthz returned 200:
	ok
	I0414 15:43:56.466323 1908903 api_server.go:141] control plane version: v1.32.2
	I0414 15:43:56.466347 1908903 api_server.go:131] duration metric: took 5.396979ms to wait for apiserver health ...
	I0414 15:43:56.466356 1908903 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 15:43:56.642591 1908903 system_pods.go:59] 7 kube-system pods found
	I0414 15:43:56.642633 1908903 system_pods.go:61] "coredns-668d6bf9bc-htdqv" [c857ef30-5813-45fe-b25e-3baa663ae97e] Running
	I0414 15:43:56.642641 1908903 system_pods.go:61] "etcd-bridge-036922" [db3aa367-4ce7-46f8-9836-5dd5993c5db9] Running
	I0414 15:43:56.642647 1908903 system_pods.go:61] "kube-apiserver-bridge-036922" [89106101-c303-4d87-be62-98869183e702] Running
	I0414 15:43:56.642653 1908903 system_pods.go:61] "kube-controller-manager-bridge-036922" [03e28ccd-fe05-4c06-a146-f732f20cfd9f] Running
	I0414 15:43:56.642657 1908903 system_pods.go:61] "kube-proxy-m4qjw" [92068c58-57c5-4fdb-a990-24376f951c61] Running
	I0414 15:43:56.642662 1908903 system_pods.go:61] "kube-scheduler-bridge-036922" [7101510d-cae7-4e98-b155-044417258287] Running
	I0414 15:43:56.642667 1908903 system_pods.go:61] "storage-provisioner" [a7921eed-0433-4ab8-a62a-1c3d799d30ce] Running
	I0414 15:43:56.642676 1908903 system_pods.go:74] duration metric: took 176.312498ms to wait for pod list to return data ...
	I0414 15:43:56.642689 1908903 default_sa.go:34] waiting for default service account to be created ...
	I0414 15:43:56.844349 1908903 default_sa.go:45] found service account: "default"
	I0414 15:43:56.844380 1908903 default_sa.go:55] duration metric: took 201.684045ms for default service account to be created ...
	I0414 15:43:56.844392 1908903 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 15:43:57.042251 1908903 system_pods.go:86] 7 kube-system pods found
	I0414 15:43:57.042294 1908903 system_pods.go:89] "coredns-668d6bf9bc-htdqv" [c857ef30-5813-45fe-b25e-3baa663ae97e] Running
	I0414 15:43:57.042303 1908903 system_pods.go:89] "etcd-bridge-036922" [db3aa367-4ce7-46f8-9836-5dd5993c5db9] Running
	I0414 15:43:57.042311 1908903 system_pods.go:89] "kube-apiserver-bridge-036922" [89106101-c303-4d87-be62-98869183e702] Running
	I0414 15:43:57.042316 1908903 system_pods.go:89] "kube-controller-manager-bridge-036922" [03e28ccd-fe05-4c06-a146-f732f20cfd9f] Running
	I0414 15:43:57.042321 1908903 system_pods.go:89] "kube-proxy-m4qjw" [92068c58-57c5-4fdb-a990-24376f951c61] Running
	I0414 15:43:57.042326 1908903 system_pods.go:89] "kube-scheduler-bridge-036922" [7101510d-cae7-4e98-b155-044417258287] Running
	I0414 15:43:57.042332 1908903 system_pods.go:89] "storage-provisioner" [a7921eed-0433-4ab8-a62a-1c3d799d30ce] Running
	I0414 15:43:57.042342 1908903 system_pods.go:126] duration metric: took 197.94205ms to wait for k8s-apps to be running ...
	I0414 15:43:57.042352 1908903 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 15:43:57.042435 1908903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:43:57.057742 1908903 system_svc.go:56] duration metric: took 15.376538ms WaitForService to wait for kubelet
	I0414 15:43:57.057778 1908903 kubeadm.go:582] duration metric: took 38.620384323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:43:57.057800 1908903 node_conditions.go:102] verifying NodePressure condition ...
	I0414 15:43:57.242521 1908903 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 15:43:57.242564 1908903 node_conditions.go:123] node cpu capacity is 2
	I0414 15:43:57.242579 1908903 node_conditions.go:105] duration metric: took 184.775007ms to run NodePressure ...
	I0414 15:43:57.242594 1908903 start.go:241] waiting for startup goroutines ...
	I0414 15:43:57.242600 1908903 start.go:246] waiting for cluster config update ...
	I0414 15:43:57.242612 1908903 start.go:255] writing updated cluster config ...
	I0414 15:43:57.242897 1908903 ssh_runner.go:195] Run: rm -f paused
	I0414 15:43:57.294073 1908903 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 15:43:57.297439 1908903 out.go:177] * Done! kubectl is now configured to use "bridge-036922" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.249308917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744645923249274887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4620129e-a667-4cfe-a2cb-23543806f628 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.250182496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8de1eaa5-db96-4758-a242-4c1680075464 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.250295262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8de1eaa5-db96-4758-a242-4c1680075464 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.250356530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8de1eaa5-db96-4758-a242-4c1680075464 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.288906043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8eaf9e06-4848-4fe8-8d07-e261b09850c3 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.289111332Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8eaf9e06-4848-4fe8-8d07-e261b09850c3 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.290901824Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c445be40-af04-4839-b111-d06bfad1e498 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.291484325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744645923291451228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c445be40-af04-4839-b111-d06bfad1e498 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.292353769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=748e37de-c8b5-420e-a84b-8061a962c693 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.292425080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=748e37de-c8b5-420e-a84b-8061a962c693 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.292481070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=748e37de-c8b5-420e-a84b-8061a962c693 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.328808629Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00634ff4-09e4-424e-8bb4-5b5681ccb2db name=/runtime.v1.RuntimeService/Version
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.328918801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00634ff4-09e4-424e-8bb4-5b5681ccb2db name=/runtime.v1.RuntimeService/Version
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.330328464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b594a315-57f2-4688-a9dc-d00ea9ec70cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.330892372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744645923330864009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b594a315-57f2-4688-a9dc-d00ea9ec70cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.331761582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cd029dd-b343-4710-9f6c-ebd44c4ff882 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.331833607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cd029dd-b343-4710-9f6c-ebd44c4ff882 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.331881157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9cd029dd-b343-4710-9f6c-ebd44c4ff882 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.372169521Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7bed3e9-f9bd-4172-a8fe-178c24edaa20 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.372263761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7bed3e9-f9bd-4172-a8fe-178c24edaa20 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.373581722Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfaf4c5f-a624-46ae-9b3f-a061e0a5fd49 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.374206545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744645923374179422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfaf4c5f-a624-46ae-9b3f-a061e0a5fd49 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.374859567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8758c14-bd4a-4e2d-9739-491a973bf584 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.374939119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8758c14-bd4a-4e2d-9739-491a973bf584 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:52:03 old-k8s-version-529869 crio[627]: time="2025-04-14 15:52:03.375046607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f8758c14-bd4a-4e2d-9739-491a973bf584 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr14 15:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057458] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.052593] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.402276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.030956] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.742898] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.855862] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.065964] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065397] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.221422] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.162433] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.282594] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.876323] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.067338] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.988800] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[Apr14 15:35] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 15:39] systemd-fstab-generator[4999]: Ignoring "noauto" option for root device
	[Apr14 15:41] systemd-fstab-generator[5283]: Ignoring "noauto" option for root device
	[  +0.099182] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:52:03 up 17 min,  0 users,  load average: 0.17, 0.10, 0.08
	Linux old-k8s-version-529869 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0008616f0)
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000dffef0, 0x4f0ac20, 0xc000c92190, 0x1, 0xc0001000c0)
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d80e0, 0xc0001000c0)
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b606c0, 0xc000dea360)
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6461]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 14 15:52:02 old-k8s-version-529869 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 14 15:52:02 old-k8s-version-529869 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 14 15:52:02 old-k8s-version-529869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 14 15:52:02 old-k8s-version-529869 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 15:52:02 old-k8s-version-529869 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6482]: I0414 15:52:02.790584    6482 server.go:416] Version: v1.20.0
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6482]: I0414 15:52:02.791097    6482 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6482]: I0414 15:52:02.793885    6482 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6482]: W0414 15:52:02.795384    6482 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 14 15:52:02 old-k8s-version-529869 kubelet[6482]: I0414 15:52:02.795459    6482 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 2 (235.771343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-529869" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (355.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:52:17.909441 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:52:26.383387 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:52:47.031341 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:53:14.733956 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:53:18.359788 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:53:18.641578 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:53:46.344778 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:53:57.791276 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:54:25.491442 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:54:31.481761 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:54:36.965746 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:54:48.843787 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/auto-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:55:41.593757 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/kindnet-036922/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.117:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.117:8443: connect: connection refused
E0414 15:56:00.029192 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:56:33.994897 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/calico-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:56:50.204927 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/custom-flannel-036922/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:57:26.383039 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:57:47.031583 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/enable-default-cni-036922/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 2 (243.228285ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-529869" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-529869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-529869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.111µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-529869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
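For reference, the checks above can be rerun by hand against this profile (a sketch; the pod-list command is assumed from the label selector in the warnings, the other two are the exact invocations shown in this log, and all of them require the old-k8s-version-529869 apiserver to be reachable, which it was not here):
	# apiserver state for the profile, as queried by the test
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869
	# dashboard pods matching the label the test polls for
	kubectl --context old-k8s-version-529869 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# deployment the test describes on failure
	kubectl --context old-k8s-version-529869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard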
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 2 (230.007344ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-529869 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-036922 sudo iptables                       | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo docker                         | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo cat                            | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo                                | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo find                           | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-036922 sudo crio                           | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-036922                                     | bridge-036922 | jenkins | v1.35.0 | 14 Apr 25 15:44 UTC | 14 Apr 25 15:44 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 15:42:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 15:42:20.393428 1908903 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:42:20.393707 1908903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:42:20.393717 1908903 out.go:358] Setting ErrFile to fd 2...
	I0414 15:42:20.393721 1908903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:42:20.394014 1908903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:42:20.394737 1908903 out.go:352] Setting JSON to false
	I0414 15:42:20.396002 1908903 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":41084,"bootTime":1744604256,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:42:20.396077 1908903 start.go:139] virtualization: kvm guest
	I0414 15:42:20.398284 1908903 out.go:177] * [bridge-036922] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:42:20.399747 1908903 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:42:20.399774 1908903 notify.go:220] Checking for updates...
	I0414 15:42:20.402506 1908903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:42:20.403700 1908903 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:42:20.404951 1908903 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:42:20.406045 1908903 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:42:20.407237 1908903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:42:20.408819 1908903 config.go:182] Loaded profile config "enable-default-cni-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:20.408920 1908903 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:20.409003 1908903 config.go:182] Loaded profile config "old-k8s-version-529869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0414 15:42:20.409078 1908903 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:42:20.449900 1908903 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 15:42:20.451424 1908903 start.go:297] selected driver: kvm2
	I0414 15:42:20.451445 1908903 start.go:901] validating driver "kvm2" against <nil>
	I0414 15:42:20.451460 1908903 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:42:20.452406 1908903 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:42:20.452490 1908903 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 15:42:20.470925 1908903 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 15:42:20.470988 1908903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 15:42:20.471237 1908903 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:42:20.471280 1908903 cni.go:84] Creating CNI manager for "bridge"
	I0414 15:42:20.471289 1908903 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 15:42:20.471347 1908903 start.go:340] cluster config:
	{Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:42:20.471467 1908903 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 15:42:20.473355 1908903 out.go:177] * Starting "bridge-036922" primary control-plane node in "bridge-036922" cluster
	I0414 15:42:18.311367 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:18.311873 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:18.311907 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:18.311830 1907444 retry.go:31] will retry after 1.961785823s: waiting for domain to come up
	I0414 15:42:20.275622 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:20.276217 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:20.276245 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:20.276160 1907444 retry.go:31] will retry after 3.443279587s: waiting for domain to come up
	I0414 15:42:18.552316 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:21.052659 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:20.474918 1908903 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:42:20.474969 1908903 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 15:42:20.474980 1908903 cache.go:56] Caching tarball of preloaded images
	I0414 15:42:20.475087 1908903 preload.go:172] Found /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 15:42:20.475100 1908903 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 15:42:20.475200 1908903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/config.json ...
	I0414 15:42:20.475219 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/config.json: {Name:mk46811239729f3d2abef41cf6cd2fb6300eacaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:20.475365 1908903 start.go:360] acquireMachinesLock for bridge-036922: {Name:mkc86dc13bd021dec2438d67c38653da4675f04d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0414 15:42:23.721372 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:23.721981 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:23.722015 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:23.721948 1907444 retry.go:31] will retry after 3.812874947s: waiting for domain to come up
	I0414 15:42:27.536454 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:27.537033 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find current IP address of domain flannel-036922 in network mk-flannel-036922
	I0414 15:42:27.537056 1907421 main.go:141] libmachine: (flannel-036922) DBG | I0414 15:42:27.537004 1907444 retry.go:31] will retry after 3.540212628s: waiting for domain to come up
	I0414 15:42:23.551530 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:25.552074 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:28.051484 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:32.627768 1908903 start.go:364] duration metric: took 12.152363514s to acquireMachinesLock for "bridge-036922"
	I0414 15:42:32.627850 1908903 start.go:93] Provisioning new machine with config: &{Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:42:32.627970 1908903 start.go:125] createHost starting for "" (driver="kvm2")
	I0414 15:42:31.081114 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.081620 1907421 main.go:141] libmachine: (flannel-036922) found domain IP: 192.168.72.200
	I0414 15:42:31.081647 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has current primary IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.081654 1907421 main.go:141] libmachine: (flannel-036922) reserving static IP address...
	I0414 15:42:31.082097 1907421 main.go:141] libmachine: (flannel-036922) DBG | unable to find host DHCP lease matching {name: "flannel-036922", mac: "52:54:00:47:a6:f3", ip: "192.168.72.200"} in network mk-flannel-036922
	I0414 15:42:31.169991 1907421 main.go:141] libmachine: (flannel-036922) DBG | Getting to WaitForSSH function...
	I0414 15:42:31.170026 1907421 main.go:141] libmachine: (flannel-036922) reserved static IP address 192.168.72.200 for domain flannel-036922
	I0414 15:42:31.170038 1907421 main.go:141] libmachine: (flannel-036922) waiting for SSH...
	I0414 15:42:31.173332 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.173746 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.173785 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.173994 1907421 main.go:141] libmachine: (flannel-036922) DBG | Using SSH client type: external
	I0414 15:42:31.174024 1907421 main.go:141] libmachine: (flannel-036922) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa (-rw-------)
	I0414 15:42:31.174056 1907421 main.go:141] libmachine: (flannel-036922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:42:31.174071 1907421 main.go:141] libmachine: (flannel-036922) DBG | About to run SSH command:
	I0414 15:42:31.174081 1907421 main.go:141] libmachine: (flannel-036922) DBG | exit 0
	I0414 15:42:31.299043 1907421 main.go:141] libmachine: (flannel-036922) DBG | SSH cmd err, output: <nil>: 
	I0414 15:42:31.299375 1907421 main.go:141] libmachine: (flannel-036922) KVM machine creation complete
	I0414 15:42:31.299910 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetConfigRaw
	I0414 15:42:31.300482 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:31.300707 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:31.300937 1907421 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 15:42:31.300956 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:31.302412 1907421 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 15:42:31.302427 1907421 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 15:42:31.302432 1907421 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 15:42:31.302437 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.305226 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.305622 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.305653 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.305832 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.306067 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.306262 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.306413 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.306582 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.306835 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.306848 1907421 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 15:42:31.409981 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:31.410015 1907421 main.go:141] libmachine: Detecting the provisioner...
	I0414 15:42:31.410027 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.412803 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.413105 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.413155 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.413279 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.413504 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.413690 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.413892 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.414073 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.414440 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.414462 1907421 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 15:42:31.519809 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 15:42:31.519916 1907421 main.go:141] libmachine: found compatible host: buildroot
	I0414 15:42:31.519927 1907421 main.go:141] libmachine: Provisioning with buildroot...
	I0414 15:42:31.519936 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetMachineName
	I0414 15:42:31.520223 1907421 buildroot.go:166] provisioning hostname "flannel-036922"
	I0414 15:42:31.520239 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetMachineName
	I0414 15:42:31.520436 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.523093 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.523484 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.523524 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.523722 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.523907 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.524062 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.524183 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.524321 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.524614 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.524632 1907421 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-036922 && echo "flannel-036922" | sudo tee /etc/hostname
	I0414 15:42:31.645537 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-036922
	
	I0414 15:42:31.645576 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.648224 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.648558 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.648593 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.648747 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.648942 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.649094 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.649255 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.649473 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:31.649681 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:31.649696 1907421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-036922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-036922/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-036922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:42:31.764596 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:31.764638 1907421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:42:31.764666 1907421 buildroot.go:174] setting up certificates
	I0414 15:42:31.764679 1907421 provision.go:84] configureAuth start
	I0414 15:42:31.764694 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetMachineName
	I0414 15:42:31.765045 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:31.768031 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.768340 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.768368 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.768520 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.770840 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.771160 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.771189 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.771328 1907421 provision.go:143] copyHostCerts
	I0414 15:42:31.771404 1907421 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:42:31.771416 1907421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:42:31.771486 1907421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:42:31.771610 1907421 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:42:31.771619 1907421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:42:31.771644 1907421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:42:31.771710 1907421 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:42:31.771717 1907421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:42:31.771741 1907421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:42:31.771791 1907421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.flannel-036922 san=[127.0.0.1 192.168.72.200 flannel-036922 localhost minikube]
	I0414 15:42:31.968023 1907421 provision.go:177] copyRemoteCerts
	I0414 15:42:31.968092 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:42:31.968117 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:31.970932 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.971208 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:31.971239 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:31.971419 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:31.971624 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:31.971760 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:31.971949 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.059121 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:42:32.086750 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0414 15:42:32.113750 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:42:32.140600 1907421 provision.go:87] duration metric: took 375.905384ms to configureAuth
	I0414 15:42:32.140649 1907421 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:42:32.140825 1907421 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:32.140910 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.143669 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.144072 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.144098 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.144301 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.144503 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.144664 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.144839 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.145044 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:32.145348 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:32.145371 1907421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:42:32.376226 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:42:32.376251 1907421 main.go:141] libmachine: Checking connection to Docker...
	I0414 15:42:32.376267 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetURL
	I0414 15:42:32.377737 1907421 main.go:141] libmachine: (flannel-036922) DBG | using libvirt version 6000000
	I0414 15:42:32.380146 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.380479 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.380510 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.380661 1907421 main.go:141] libmachine: Docker is up and running!
	I0414 15:42:32.380675 1907421 main.go:141] libmachine: Reticulating splines...
	I0414 15:42:32.380683 1907421 client.go:171] duration metric: took 24.152526095s to LocalClient.Create
	I0414 15:42:32.380708 1907421 start.go:167] duration metric: took 24.152593581s to libmachine.API.Create "flannel-036922"
	I0414 15:42:32.380736 1907421 start.go:293] postStartSetup for "flannel-036922" (driver="kvm2")
	I0414 15:42:32.380753 1907421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:42:32.380784 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.381034 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:42:32.381060 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.383436 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.383744 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.383765 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.383939 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.384128 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.384303 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.384449 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.469641 1907421 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:42:32.474716 1907421 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:42:32.474754 1907421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:42:32.474843 1907421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:42:32.474963 1907421 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:42:32.475080 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:42:32.485571 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:42:32.513908 1907421 start.go:296] duration metric: took 133.150087ms for postStartSetup
	I0414 15:42:32.513976 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetConfigRaw
	I0414 15:42:32.514671 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:32.517434 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.517794 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.517830 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.518116 1907421 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/config.json ...
	I0414 15:42:32.518321 1907421 start.go:128] duration metric: took 24.310122388s to createHost
	I0414 15:42:32.518346 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.520587 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.520903 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.520939 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.521138 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.521368 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.521508 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.521672 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.521818 1907421 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:32.522073 1907421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.72.200 22 <nil> <nil>}
	I0414 15:42:32.522085 1907421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:42:32.627543 1907421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744645352.607238172
	
	I0414 15:42:32.627581 1907421 fix.go:216] guest clock: 1744645352.607238172
	I0414 15:42:32.627603 1907421 fix.go:229] Guest: 2025-04-14 15:42:32.607238172 +0000 UTC Remote: 2025-04-14 15:42:32.518333951 +0000 UTC m=+24.431599100 (delta=88.904221ms)
	I0414 15:42:32.627642 1907421 fix.go:200] guest clock delta is within tolerance: 88.904221ms
	I0414 15:42:32.627654 1907421 start.go:83] releasing machines lock for "flannel-036922", held for 24.419524725s
	I0414 15:42:32.627691 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.628088 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:32.631249 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.631790 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.631818 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.632042 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.632785 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.633042 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:32.633151 1907421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:42:32.633227 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.633252 1907421 ssh_runner.go:195] Run: cat /version.json
	I0414 15:42:32.633267 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:32.636525 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.636562 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.636948 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.636985 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:32.637010 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.637085 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:32.637238 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.637465 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.637483 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:32.637697 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:32.637723 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.637882 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.637900 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:32.638077 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:32.717463 1907421 ssh_runner.go:195] Run: systemctl --version
	I0414 15:42:32.745427 1907421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:42:32.909851 1907421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:42:32.916503 1907421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:42:32.916578 1907421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:42:32.933971 1907421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:42:32.933995 1907421 start.go:495] detecting cgroup driver to use...
	I0414 15:42:32.934071 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:42:32.952308 1907421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:42:32.970781 1907421 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:42:32.970865 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:42:32.987714 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:42:33.006216 1907421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:42:30.551892 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:32.552139 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:33.157399 1907421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:42:33.324202 1907421 docker.go:233] disabling docker service ...
	I0414 15:42:33.324273 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:42:33.341314 1907421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:42:33.357080 1907421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:42:33.549837 1907421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:42:33.699436 1907421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:42:33.714710 1907421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:42:33.738926 1907421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 15:42:33.739015 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.751493 1907421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:42:33.751594 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.764325 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.776597 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.789601 1907421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:42:33.802342 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.813914 1907421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.837591 1907421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:33.849585 1907421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:42:33.862417 1907421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:42:33.862494 1907421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:42:33.879615 1907421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0414 15:42:33.891734 1907421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:34.014337 1907421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 15:42:34.117483 1907421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:42:34.117570 1907421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:42:34.123036 1907421 start.go:563] Will wait 60s for crictl version
	I0414 15:42:34.123111 1907421 ssh_runner.go:195] Run: which crictl
	I0414 15:42:34.128066 1907421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:42:34.173872 1907421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:42:34.173955 1907421 ssh_runner.go:195] Run: crio --version
	I0414 15:42:34.210232 1907421 ssh_runner.go:195] Run: crio --version
	I0414 15:42:34.246653 1907421 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 15:42:32.631413 1908903 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0414 15:42:32.631616 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:32.631698 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:32.649503 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40397
	I0414 15:42:32.649969 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:32.650582 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:42:32.650606 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:32.651035 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:32.651256 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:32.651415 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:32.651580 1908903 start.go:159] libmachine.API.Create for "bridge-036922" (driver="kvm2")
	I0414 15:42:32.651640 1908903 client.go:168] LocalClient.Create starting
	I0414 15:42:32.651683 1908903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem
	I0414 15:42:32.651736 1908903 main.go:141] libmachine: Decoding PEM data...
	I0414 15:42:32.651761 1908903 main.go:141] libmachine: Parsing certificate...
	I0414 15:42:32.651848 1908903 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem
	I0414 15:42:32.651877 1908903 main.go:141] libmachine: Decoding PEM data...
	I0414 15:42:32.651896 1908903 main.go:141] libmachine: Parsing certificate...
	I0414 15:42:32.651923 1908903 main.go:141] libmachine: Running pre-create checks...
	I0414 15:42:32.651944 1908903 main.go:141] libmachine: (bridge-036922) Calling .PreCreateCheck
	I0414 15:42:32.652284 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetConfigRaw
	I0414 15:42:32.652746 1908903 main.go:141] libmachine: Creating machine...
	I0414 15:42:32.652761 1908903 main.go:141] libmachine: (bridge-036922) Calling .Create
	I0414 15:42:32.652923 1908903 main.go:141] libmachine: (bridge-036922) creating KVM machine...
	I0414 15:42:32.652944 1908903 main.go:141] libmachine: (bridge-036922) creating network...
	I0414 15:42:32.654276 1908903 main.go:141] libmachine: (bridge-036922) DBG | found existing default KVM network
	I0414 15:42:32.655546 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.655372 1909012 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:fb:6f} reservation:<nil>}
	I0414 15:42:32.656280 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.656199 1909012 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:dc:27:da} reservation:<nil>}
	I0414 15:42:32.657561 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.657462 1909012 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000292ac0}
	I0414 15:42:32.657591 1908903 main.go:141] libmachine: (bridge-036922) DBG | created network xml: 
	I0414 15:42:32.657603 1908903 main.go:141] libmachine: (bridge-036922) DBG | <network>
	I0414 15:42:32.657610 1908903 main.go:141] libmachine: (bridge-036922) DBG |   <name>mk-bridge-036922</name>
	I0414 15:42:32.657618 1908903 main.go:141] libmachine: (bridge-036922) DBG |   <dns enable='no'/>
	I0414 15:42:32.657625 1908903 main.go:141] libmachine: (bridge-036922) DBG |   
	I0414 15:42:32.657634 1908903 main.go:141] libmachine: (bridge-036922) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0414 15:42:32.657644 1908903 main.go:141] libmachine: (bridge-036922) DBG |     <dhcp>
	I0414 15:42:32.657656 1908903 main.go:141] libmachine: (bridge-036922) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0414 15:42:32.657665 1908903 main.go:141] libmachine: (bridge-036922) DBG |     </dhcp>
	I0414 15:42:32.657673 1908903 main.go:141] libmachine: (bridge-036922) DBG |   </ip>
	I0414 15:42:32.657685 1908903 main.go:141] libmachine: (bridge-036922) DBG |   
	I0414 15:42:32.657692 1908903 main.go:141] libmachine: (bridge-036922) DBG | </network>
	I0414 15:42:32.657700 1908903 main.go:141] libmachine: (bridge-036922) DBG | 
	I0414 15:42:32.663623 1908903 main.go:141] libmachine: (bridge-036922) DBG | trying to create private KVM network mk-bridge-036922 192.168.61.0/24...
	I0414 15:42:32.748953 1908903 main.go:141] libmachine: (bridge-036922) DBG | private KVM network mk-bridge-036922 192.168.61.0/24 created
	I0414 15:42:32.748994 1908903 main.go:141] libmachine: (bridge-036922) setting up store path in /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922 ...
	I0414 15:42:32.749036 1908903 main.go:141] libmachine: (bridge-036922) building disk image from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 15:42:32.749186 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:32.748956 1909012 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:42:32.749224 1908903 main.go:141] libmachine: (bridge-036922) Downloading /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0414 15:42:33.058633 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.058470 1909012 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa...
	I0414 15:42:33.132442 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.132298 1909012 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/bridge-036922.rawdisk...
	I0414 15:42:33.132477 1908903 main.go:141] libmachine: (bridge-036922) DBG | Writing magic tar header
	I0414 15:42:33.132492 1908903 main.go:141] libmachine: (bridge-036922) DBG | Writing SSH key tar header
	I0414 15:42:33.132503 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.132444 1909012 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922 ...
	I0414 15:42:33.132598 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922
	I0414 15:42:33.132618 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines
	I0414 15:42:33.132632 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922 (perms=drwx------)
	I0414 15:42:33.132653 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube/machines (perms=drwxr-xr-x)
	I0414 15:42:33.132668 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971/.minikube (perms=drwxr-xr-x)
	I0414 15:42:33.132681 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration/20512-1845971 (perms=drwxrwxr-x)
	I0414 15:42:33.132691 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0414 15:42:33.132708 1908903 main.go:141] libmachine: (bridge-036922) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0414 15:42:33.132722 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:42:33.132731 1908903 main.go:141] libmachine: (bridge-036922) creating domain...
	I0414 15:42:33.132765 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20512-1845971
	I0414 15:42:33.132797 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0414 15:42:33.132810 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home/jenkins
	I0414 15:42:33.132825 1908903 main.go:141] libmachine: (bridge-036922) DBG | checking permissions on dir: /home
	I0414 15:42:33.132858 1908903 main.go:141] libmachine: (bridge-036922) DBG | skipping /home - not owner
	I0414 15:42:33.134361 1908903 main.go:141] libmachine: (bridge-036922) define libvirt domain using xml: 
	I0414 15:42:33.134417 1908903 main.go:141] libmachine: (bridge-036922) <domain type='kvm'>
	I0414 15:42:33.134428 1908903 main.go:141] libmachine: (bridge-036922)   <name>bridge-036922</name>
	I0414 15:42:33.134436 1908903 main.go:141] libmachine: (bridge-036922)   <memory unit='MiB'>3072</memory>
	I0414 15:42:33.134447 1908903 main.go:141] libmachine: (bridge-036922)   <vcpu>2</vcpu>
	I0414 15:42:33.134454 1908903 main.go:141] libmachine: (bridge-036922)   <features>
	I0414 15:42:33.134476 1908903 main.go:141] libmachine: (bridge-036922)     <acpi/>
	I0414 15:42:33.134491 1908903 main.go:141] libmachine: (bridge-036922)     <apic/>
	I0414 15:42:33.134498 1908903 main.go:141] libmachine: (bridge-036922)     <pae/>
	I0414 15:42:33.134503 1908903 main.go:141] libmachine: (bridge-036922)     
	I0414 15:42:33.134515 1908903 main.go:141] libmachine: (bridge-036922)   </features>
	I0414 15:42:33.134526 1908903 main.go:141] libmachine: (bridge-036922)   <cpu mode='host-passthrough'>
	I0414 15:42:33.134533 1908903 main.go:141] libmachine: (bridge-036922)   
	I0414 15:42:33.134542 1908903 main.go:141] libmachine: (bridge-036922)   </cpu>
	I0414 15:42:33.134548 1908903 main.go:141] libmachine: (bridge-036922)   <os>
	I0414 15:42:33.134557 1908903 main.go:141] libmachine: (bridge-036922)     <type>hvm</type>
	I0414 15:42:33.134591 1908903 main.go:141] libmachine: (bridge-036922)     <boot dev='cdrom'/>
	I0414 15:42:33.134612 1908903 main.go:141] libmachine: (bridge-036922)     <boot dev='hd'/>
	I0414 15:42:33.134622 1908903 main.go:141] libmachine: (bridge-036922)     <bootmenu enable='no'/>
	I0414 15:42:33.134628 1908903 main.go:141] libmachine: (bridge-036922)   </os>
	I0414 15:42:33.134637 1908903 main.go:141] libmachine: (bridge-036922)   <devices>
	I0414 15:42:33.134649 1908903 main.go:141] libmachine: (bridge-036922)     <disk type='file' device='cdrom'>
	I0414 15:42:33.134666 1908903 main.go:141] libmachine: (bridge-036922)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/boot2docker.iso'/>
	I0414 15:42:33.134677 1908903 main.go:141] libmachine: (bridge-036922)       <target dev='hdc' bus='scsi'/>
	I0414 15:42:33.134686 1908903 main.go:141] libmachine: (bridge-036922)       <readonly/>
	I0414 15:42:33.134695 1908903 main.go:141] libmachine: (bridge-036922)     </disk>
	I0414 15:42:33.134704 1908903 main.go:141] libmachine: (bridge-036922)     <disk type='file' device='disk'>
	I0414 15:42:33.134716 1908903 main.go:141] libmachine: (bridge-036922)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0414 15:42:33.134734 1908903 main.go:141] libmachine: (bridge-036922)       <source file='/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/bridge-036922.rawdisk'/>
	I0414 15:42:33.134745 1908903 main.go:141] libmachine: (bridge-036922)       <target dev='hda' bus='virtio'/>
	I0414 15:42:33.134753 1908903 main.go:141] libmachine: (bridge-036922)     </disk>
	I0414 15:42:33.134763 1908903 main.go:141] libmachine: (bridge-036922)     <interface type='network'>
	I0414 15:42:33.134772 1908903 main.go:141] libmachine: (bridge-036922)       <source network='mk-bridge-036922'/>
	I0414 15:42:33.134782 1908903 main.go:141] libmachine: (bridge-036922)       <model type='virtio'/>
	I0414 15:42:33.134790 1908903 main.go:141] libmachine: (bridge-036922)     </interface>
	I0414 15:42:33.134798 1908903 main.go:141] libmachine: (bridge-036922)     <interface type='network'>
	I0414 15:42:33.134804 1908903 main.go:141] libmachine: (bridge-036922)       <source network='default'/>
	I0414 15:42:33.134810 1908903 main.go:141] libmachine: (bridge-036922)       <model type='virtio'/>
	I0414 15:42:33.134823 1908903 main.go:141] libmachine: (bridge-036922)     </interface>
	I0414 15:42:33.134831 1908903 main.go:141] libmachine: (bridge-036922)     <serial type='pty'>
	I0414 15:42:33.134841 1908903 main.go:141] libmachine: (bridge-036922)       <target port='0'/>
	I0414 15:42:33.134851 1908903 main.go:141] libmachine: (bridge-036922)     </serial>
	I0414 15:42:33.134860 1908903 main.go:141] libmachine: (bridge-036922)     <console type='pty'>
	I0414 15:42:33.134870 1908903 main.go:141] libmachine: (bridge-036922)       <target type='serial' port='0'/>
	I0414 15:42:33.134878 1908903 main.go:141] libmachine: (bridge-036922)     </console>
	I0414 15:42:33.134887 1908903 main.go:141] libmachine: (bridge-036922)     <rng model='virtio'>
	I0414 15:42:33.134893 1908903 main.go:141] libmachine: (bridge-036922)       <backend model='random'>/dev/random</backend>
	I0414 15:42:33.134901 1908903 main.go:141] libmachine: (bridge-036922)     </rng>
	I0414 15:42:33.134928 1908903 main.go:141] libmachine: (bridge-036922)     
	I0414 15:42:33.134945 1908903 main.go:141] libmachine: (bridge-036922)     
	I0414 15:42:33.134958 1908903 main.go:141] libmachine: (bridge-036922)   </devices>
	I0414 15:42:33.134967 1908903 main.go:141] libmachine: (bridge-036922) </domain>
	I0414 15:42:33.134981 1908903 main.go:141] libmachine: (bridge-036922) 
	I0414 15:42:33.139633 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:ce:30:4b in network default
	I0414 15:42:33.140227 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:33.140266 1908903 main.go:141] libmachine: (bridge-036922) starting domain...
	I0414 15:42:33.140279 1908903 main.go:141] libmachine: (bridge-036922) ensuring networks are active...
	I0414 15:42:33.140917 1908903 main.go:141] libmachine: (bridge-036922) Ensuring network default is active
	I0414 15:42:33.141340 1908903 main.go:141] libmachine: (bridge-036922) Ensuring network mk-bridge-036922 is active
	I0414 15:42:33.142027 1908903 main.go:141] libmachine: (bridge-036922) getting domain XML...
	I0414 15:42:33.143089 1908903 main.go:141] libmachine: (bridge-036922) creating domain...
	I0414 15:42:33.536114 1908903 main.go:141] libmachine: (bridge-036922) waiting for IP...
	I0414 15:42:33.536974 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:33.537437 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:33.537518 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.537440 1909012 retry.go:31] will retry after 243.753367ms: waiting for domain to come up
	I0414 15:42:33.783413 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:33.784074 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:33.784104 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:33.784044 1909012 retry.go:31] will retry after 339.050332ms: waiting for domain to come up
	I0414 15:42:34.124346 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:34.124819 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:34.124847 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:34.124793 1909012 retry.go:31] will retry after 477.978489ms: waiting for domain to come up
	I0414 15:42:34.604689 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:34.605405 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:34.605478 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:34.605396 1909012 retry.go:31] will retry after 606.717012ms: waiting for domain to come up
	I0414 15:42:35.214566 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:35.215302 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:35.215335 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:35.215304 1909012 retry.go:31] will retry after 585.677483ms: waiting for domain to come up
	I0414 15:42:34.248060 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetIP
	I0414 15:42:34.251061 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:34.251494 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:34.251536 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:34.251790 1907421 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0414 15:42:34.257345 1907421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:42:34.271269 1907421 kubeadm.go:883] updating cluster {Name:flannel-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:42:34.271419 1907421 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:42:34.271491 1907421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:42:34.310047 1907421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 15:42:34.310148 1907421 ssh_runner.go:195] Run: which lz4
	I0414 15:42:34.314914 1907421 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:42:34.319663 1907421 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:42:34.319706 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 15:42:36.005122 1907421 crio.go:462] duration metric: took 1.690246926s to copy over tarball
	I0414 15:42:36.005231 1907421 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 15:42:34.553205 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:37.052635 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:38.486201 1907421 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.480920023s)
	I0414 15:42:38.486301 1907421 crio.go:469] duration metric: took 2.481131687s to extract the tarball
	I0414 15:42:38.486328 1907421 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 15:42:38.536845 1907421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:42:38.588854 1907421 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 15:42:38.588889 1907421 cache_images.go:84] Images are preloaded, skipping loading
	I0414 15:42:38.588901 1907421 kubeadm.go:934] updating node { 192.168.72.200 8443 v1.32.2 crio true true} ...
	I0414 15:42:38.589066 1907421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-036922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:flannel-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0414 15:42:38.589161 1907421 ssh_runner.go:195] Run: crio config
	I0414 15:42:38.639561 1907421 cni.go:84] Creating CNI manager for "flannel"
	I0414 15:42:38.639596 1907421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 15:42:38.639626 1907421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.200 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-036922 NodeName:flannel-036922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 15:42:38.639887 1907421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-036922"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.200"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.200"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 15:42:38.640037 1907421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 15:42:38.651901 1907421 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 15:42:38.651997 1907421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 15:42:38.662036 1907421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0414 15:42:38.680585 1907421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 15:42:38.698787 1907421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0414 15:42:38.721640 1907421 ssh_runner.go:195] Run: grep 192.168.72.200	control-plane.minikube.internal$ /etc/hosts
	I0414 15:42:38.726592 1907421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:42:38.740768 1907421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:38.899231 1907421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:42:38.918385 1907421 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922 for IP: 192.168.72.200
	I0414 15:42:38.918418 1907421 certs.go:194] generating shared ca certs ...
	I0414 15:42:38.918437 1907421 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:38.918692 1907421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 15:42:38.918762 1907421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 15:42:38.918790 1907421 certs.go:256] generating profile certs ...
	I0414 15:42:38.918873 1907421 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.key
	I0414 15:42:38.918893 1907421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt with IP's: []
	I0414 15:42:39.040105 1907421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt ...
	I0414 15:42:39.040138 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.crt: {Name:mk2541d497355f75330e1e8d45ca7c05c9151252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.040344 1907421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.key ...
	I0414 15:42:39.040361 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/client.key: {Name:mk380b7bf852abf1b8988acb006ad6fc4e37f4e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.040469 1907421 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca
	I0414 15:42:39.040487 1907421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.200]
	I0414 15:42:39.250195 1907421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca ...
	I0414 15:42:39.250233 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca: {Name:mkbe9b8905a248872f1e8ad1d846ab894bf1ccb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.250430 1907421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca ...
	I0414 15:42:39.250443 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca: {Name:mk00eed7dd27975a2c63b91d58b73bd49c86808b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.250518 1907421 certs.go:381] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt.25527bca -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt
	I0414 15:42:39.250615 1907421 certs.go:385] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key.25527bca -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key
	I0414 15:42:39.250679 1907421 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key
	I0414 15:42:39.250697 1907421 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt with IP's: []
	I0414 15:42:39.442422 1907421 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt ...
	I0414 15:42:39.442455 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt: {Name:mka0a36bc874e1164bc79c06b6893dbd73138c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.442664 1907421 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key ...
	I0414 15:42:39.442682 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key: {Name:mkee6ef65a530aee53bdaac10b3fb60ee09dbe64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:39.442891 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem (1338 bytes)
	W0414 15:42:39.442929 1907421 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270_empty.pem, impossibly tiny 0 bytes
	I0414 15:42:39.442940 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 15:42:39.442967 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 15:42:39.442990 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 15:42:39.443010 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 15:42:39.443051 1907421 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:42:39.443680 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 15:42:39.474252 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:42:39.504144 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:42:39.530953 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:42:39.560025 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 15:42:39.592232 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 15:42:39.640260 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:42:39.670285 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/flannel-036922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 15:42:39.698670 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:42:39.726986 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:42:39.754399 1907421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:42:39.788251 1907421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:42:39.807950 1907421 ssh_runner.go:195] Run: openssl version
	I0414 15:42:39.814532 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:42:39.827541 1907421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:42:39.834201 1907421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:42:39.834285 1907421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:42:39.841587 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:42:39.853993 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:42:39.879246 1907421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:42:39.884226 1907421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:42:39.884303 1907421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:42:39.890625 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
	I0414 15:42:39.903508 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:42:39.915981 1907421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:42:39.921299 1907421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:42:39.921368 1907421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:42:39.927524 1907421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 15:42:39.939848 1907421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:42:39.945029 1907421 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 15:42:39.945115 1907421 kubeadm.go:392] StartCluster: {Name:flannel-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:flannel-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:42:39.945228 1907421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:42:39.945336 1907421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:42:39.993625 1907421 cri.go:89] found id: ""
	I0414 15:42:39.993726 1907421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 15:42:40.007930 1907421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:42:40.022297 1907421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:42:40.033983 1907421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:42:40.034008 1907421 kubeadm.go:157] found existing configuration files:
	
	I0414 15:42:40.034060 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:42:40.044411 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:42:40.044493 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:42:40.057768 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:42:40.068947 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:42:40.069049 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:42:40.080075 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:42:40.090907 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:42:40.090972 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:42:40.102034 1907421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:42:40.113045 1907421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:42:40.113105 1907421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:42:40.123704 1907421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:42:40.185411 1907421 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 15:42:40.185554 1907421 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:42:40.312075 1907421 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:42:40.312258 1907421 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:42:40.312435 1907421 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 15:42:40.324898 1907421 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:42:35.802698 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:35.803793 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:35.803828 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:35.803707 1909012 retry.go:31] will retry after 741.40736ms: waiting for domain to come up
	I0414 15:42:36.546572 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:36.547205 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:36.547270 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:36.547183 1909012 retry.go:31] will retry after 1.039019091s: waiting for domain to come up
	I0414 15:42:37.587454 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:37.588056 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:37.588092 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:37.588030 1909012 retry.go:31] will retry after 1.343543316s: waiting for domain to come up
	I0414 15:42:38.933902 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:38.934408 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:38.934499 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:38.934406 1909012 retry.go:31] will retry after 1.727468698s: waiting for domain to come up
	I0414 15:42:40.461045 1907421 out.go:235]   - Generating certificates and keys ...
	I0414 15:42:40.461189 1907421 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:42:40.461295 1907421 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:42:40.461411 1907421 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 15:42:40.576540 1907421 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 15:42:41.022193 1907421 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 15:42:41.083437 1907421 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 15:42:41.196088 1907421 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 15:42:41.196393 1907421 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-036922 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0414 15:42:41.305312 1907421 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 15:42:41.305484 1907421 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-036922 localhost] and IPs [192.168.72.200 127.0.0.1 ::1]
	I0414 15:42:41.499140 1907421 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 15:42:41.648257 1907421 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 15:42:41.792405 1907421 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 15:42:41.792718 1907421 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:42:41.986714 1907421 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:42:42.087153 1907421 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 15:42:42.240947 1907421 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:42:42.386910 1907421 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:42:42.522160 1907421 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:42:42.523999 1907421 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:42:42.528115 1907421 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:42:42.574611 1907421 out.go:235]   - Booting up control plane ...
	I0414 15:42:42.574762 1907421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:42:42.574856 1907421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:42:42.574940 1907421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:42:42.575132 1907421 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:42:42.575258 1907421 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:42:42.575350 1907421 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:42:42.720695 1907421 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 15:42:42.720861 1907421 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 15:42:39.553503 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:41.567599 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:40.664501 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:40.665113 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:40.665156 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:40.665097 1909012 retry.go:31] will retry after 2.255462045s: waiting for domain to come up
	I0414 15:42:42.921827 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:42.922516 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:42.922554 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:42.922480 1909012 retry.go:31] will retry after 2.269647989s: waiting for domain to come up
	I0414 15:42:45.194050 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:45.194621 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:45.194654 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:45.194559 1909012 retry.go:31] will retry after 2.479039637s: waiting for domain to come up
	I0414 15:42:44.113357 1905530 pod_ready.go:103] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"False"
	I0414 15:42:45.058678 1905530 pod_ready.go:93] pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.058714 1905530 pod_ready.go:82] duration metric: took 33.01340484s for pod "coredns-668d6bf9bc-bwv4t" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.058732 1905530 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.061628 1905530 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-ss42g" not found
	I0414 15:42:45.061664 1905530 pod_ready.go:82] duration metric: took 2.923616ms for pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace to be "Ready" ...
	E0414 15:42:45.061680 1905530 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-ss42g" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-ss42g" not found
	I0414 15:42:45.061691 1905530 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.070770 1905530 pod_ready.go:93] pod "etcd-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.070808 1905530 pod_ready.go:82] duration metric: took 9.101557ms for pod "etcd-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.070826 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.079164 1905530 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.079198 1905530 pod_ready.go:82] duration metric: took 8.362407ms for pod "kube-apiserver-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.079213 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.087476 1905530 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.087505 1905530 pod_ready.go:82] duration metric: took 8.282442ms for pod "kube-controller-manager-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.087518 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-cf9hn" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.249123 1905530 pod_ready.go:93] pod "kube-proxy-cf9hn" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.249155 1905530 pod_ready.go:82] duration metric: took 161.628764ms for pod "kube-proxy-cf9hn" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.249170 1905530 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.650160 1905530 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:42:45.650266 1905530 pod_ready.go:82] duration metric: took 401.084136ms for pod "kube-scheduler-enable-default-cni-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:42:45.650296 1905530 pod_ready.go:39] duration metric: took 33.615016594s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:42:45.650331 1905530 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:42:45.650448 1905530 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:42:45.673971 1905530 api_server.go:72] duration metric: took 34.576366052s to wait for apiserver process to appear ...
	I0414 15:42:45.674014 1905530 api_server.go:88] waiting for apiserver healthz status ...
	I0414 15:42:45.674039 1905530 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0414 15:42:45.682032 1905530 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0414 15:42:45.683306 1905530 api_server.go:141] control plane version: v1.32.2
	I0414 15:42:45.683334 1905530 api_server.go:131] duration metric: took 9.31155ms to wait for apiserver health ...
	I0414 15:42:45.683345 1905530 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 15:42:45.851783 1905530 system_pods.go:59] 7 kube-system pods found
	I0414 15:42:45.851838 1905530 system_pods.go:61] "coredns-668d6bf9bc-bwv4t" [790563e2-b22e-4bbe-bbc5-b52f76b839b5] Running
	I0414 15:42:45.851847 1905530 system_pods.go:61] "etcd-enable-default-cni-036922" [527007de-831a-4582-9cbb-baa01fc7f75a] Running
	I0414 15:42:45.851855 1905530 system_pods.go:61] "kube-apiserver-enable-default-cni-036922" [d3500886-ec33-4079-9f8d-efe868d36abe] Running
	I0414 15:42:45.851861 1905530 system_pods.go:61] "kube-controller-manager-enable-default-cni-036922" [109c13d5-06e7-4b5a-af83-2c859621953f] Running
	I0414 15:42:45.851870 1905530 system_pods.go:61] "kube-proxy-cf9hn" [75a57fce-ef6e-43a7-9c2f-57b3a2b02829] Running
	I0414 15:42:45.851875 1905530 system_pods.go:61] "kube-scheduler-enable-default-cni-036922" [d0f475a2-3fcc-44f3-8eb9-e3e2aaebb279] Running
	I0414 15:42:45.851883 1905530 system_pods.go:61] "storage-provisioner" [5b286627-a3ba-4c03-ab91-e9dc6297afd2] Running
	I0414 15:42:45.851892 1905530 system_pods.go:74] duration metric: took 168.539138ms to wait for pod list to return data ...
	I0414 15:42:45.851906 1905530 default_sa.go:34] waiting for default service account to be created ...
	I0414 15:42:46.051425 1905530 default_sa.go:45] found service account: "default"
	I0414 15:42:46.051460 1905530 default_sa.go:55] duration metric: took 199.54254ms for default service account to be created ...
	I0414 15:42:46.051473 1905530 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 15:42:46.251287 1905530 system_pods.go:86] 7 kube-system pods found
	I0414 15:42:46.251414 1905530 system_pods.go:89] "coredns-668d6bf9bc-bwv4t" [790563e2-b22e-4bbe-bbc5-b52f76b839b5] Running
	I0414 15:42:46.251431 1905530 system_pods.go:89] "etcd-enable-default-cni-036922" [527007de-831a-4582-9cbb-baa01fc7f75a] Running
	I0414 15:42:46.251438 1905530 system_pods.go:89] "kube-apiserver-enable-default-cni-036922" [d3500886-ec33-4079-9f8d-efe868d36abe] Running
	I0414 15:42:46.251447 1905530 system_pods.go:89] "kube-controller-manager-enable-default-cni-036922" [109c13d5-06e7-4b5a-af83-2c859621953f] Running
	I0414 15:42:46.251454 1905530 system_pods.go:89] "kube-proxy-cf9hn" [75a57fce-ef6e-43a7-9c2f-57b3a2b02829] Running
	I0414 15:42:46.251459 1905530 system_pods.go:89] "kube-scheduler-enable-default-cni-036922" [d0f475a2-3fcc-44f3-8eb9-e3e2aaebb279] Running
	I0414 15:42:46.251465 1905530 system_pods.go:89] "storage-provisioner" [5b286627-a3ba-4c03-ab91-e9dc6297afd2] Running
	I0414 15:42:46.251476 1905530 system_pods.go:126] duration metric: took 199.99443ms to wait for k8s-apps to be running ...
	I0414 15:42:46.251491 1905530 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 15:42:46.251557 1905530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:42:46.272907 1905530 system_svc.go:56] duration metric: took 21.403314ms WaitForService to wait for kubelet
	I0414 15:42:46.272947 1905530 kubeadm.go:582] duration metric: took 35.175353213s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:42:46.272975 1905530 node_conditions.go:102] verifying NodePressure condition ...
	I0414 15:42:46.449997 1905530 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 15:42:46.450040 1905530 node_conditions.go:123] node cpu capacity is 2
	I0414 15:42:46.450061 1905530 node_conditions.go:105] duration metric: took 177.079158ms to run NodePressure ...
	I0414 15:42:46.450077 1905530 start.go:241] waiting for startup goroutines ...
	I0414 15:42:46.450088 1905530 start.go:246] waiting for cluster config update ...
	I0414 15:42:46.450103 1905530 start.go:255] writing updated cluster config ...
	I0414 15:42:46.450597 1905530 ssh_runner.go:195] Run: rm -f paused
	I0414 15:42:46.505249 1905530 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 15:42:46.508181 1905530 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-036922" cluster and "default" namespace by default
	I0414 15:42:43.225629 1907421 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.246285ms
	I0414 15:42:43.225795 1907421 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 15:42:49.223859 1907421 kubeadm.go:310] [api-check] The API server is healthy after 6.002939425s
	I0414 15:42:49.246703 1907421 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 15:42:49.269556 1907421 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 15:42:49.315606 1907421 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 15:42:49.315885 1907421 kubeadm.go:310] [mark-control-plane] Marking the node flannel-036922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 15:42:49.332520 1907421 kubeadm.go:310] [bootstrap-token] Using token: 6dsy98.vc3wpm9di98p1e2l
	I0414 15:42:47.675403 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:47.675860 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:47.675916 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:47.675831 1909012 retry.go:31] will retry after 3.188398794s: waiting for domain to come up
	I0414 15:42:49.335286 1907421 out.go:235]   - Configuring RBAC rules ...
	I0414 15:42:49.335480 1907421 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 15:42:49.342167 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 15:42:49.352554 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 15:42:49.361630 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 15:42:49.366627 1907421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 15:42:49.372335 1907421 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 15:42:49.632892 1907421 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 15:42:50.092146 1907421 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 15:42:50.689823 1907421 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 15:42:50.691428 1907421 kubeadm.go:310] 
	I0414 15:42:50.691533 1907421 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 15:42:50.691545 1907421 kubeadm.go:310] 
	I0414 15:42:50.691654 1907421 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 15:42:50.691666 1907421 kubeadm.go:310] 
	I0414 15:42:50.691717 1907421 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 15:42:50.691812 1907421 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 15:42:50.691896 1907421 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 15:42:50.691906 1907421 kubeadm.go:310] 
	I0414 15:42:50.692009 1907421 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 15:42:50.692042 1907421 kubeadm.go:310] 
	I0414 15:42:50.692107 1907421 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 15:42:50.692120 1907421 kubeadm.go:310] 
	I0414 15:42:50.692187 1907421 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 15:42:50.692272 1907421 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 15:42:50.692368 1907421 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 15:42:50.692381 1907421 kubeadm.go:310] 
	I0414 15:42:50.692494 1907421 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 15:42:50.692586 1907421 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 15:42:50.692598 1907421 kubeadm.go:310] 
	I0414 15:42:50.692692 1907421 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6dsy98.vc3wpm9di98p1e2l \
	I0414 15:42:50.692847 1907421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f \
	I0414 15:42:50.692890 1907421 kubeadm.go:310] 	--control-plane 
	I0414 15:42:50.692903 1907421 kubeadm.go:310] 
	I0414 15:42:50.693022 1907421 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 15:42:50.693031 1907421 kubeadm.go:310] 
	I0414 15:42:50.693144 1907421 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6dsy98.vc3wpm9di98p1e2l \
	I0414 15:42:50.693291 1907421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f 
	I0414 15:42:50.693806 1907421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
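	The --discovery-token-ca-cert-hash printed in the join commands above can be re-derived on the control plane from the cluster CA; a sketch using stock openssl, assuming the default kubeadm PKI path:
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	The hex digest should match the sha256:4153ea3e... value shown above.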
	I0414 15:42:50.694067 1907421 cni.go:84] Creating CNI manager for "flannel"
	I0414 15:42:50.696952 1907421 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0414 15:42:50.698346 1907421 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 15:42:50.706416 1907421 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 15:42:50.706438 1907421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0414 15:42:50.727656 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0414 15:42:51.287720 1907421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 15:42:51.287835 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:51.287871 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-036922 minikube.k8s.io/updated_at=2025_04_14T15_42_51_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=flannel-036922 minikube.k8s.io/primary=true
	I0414 15:42:51.430599 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:51.430598 1907421 ops.go:34] apiserver oom_adj: -16
	I0414 15:42:51.930825 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:52.430933 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:52.931267 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:53.431500 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:53.931720 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:54.431756 1907421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:42:54.561136 1907421 kubeadm.go:1113] duration metric: took 3.273384012s to wait for elevateKubeSystemPrivileges
	I0414 15:42:54.561187 1907421 kubeadm.go:394] duration metric: took 14.616077815s to StartCluster
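	The repeated `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait loop, polling until the default ServiceAccount exists. Roughly equivalent shell, reusing the binary and kubeconfig paths from this run (a sketch, not the exact retry/backoff minikube uses):
	  until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done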
	I0414 15:42:54.561215 1907421 settings.go:142] acquiring lock: {Name:mkf8fdccd744793c9a876a07da6b33fabe880d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:54.561317 1907421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:42:54.562809 1907421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/kubeconfig: {Name:mk700cb2cf46a87df11c1873f52c26c76c14915e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:42:54.563052 1907421 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.200 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:42:54.563065 1907421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 15:42:54.563117 1907421 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
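	Only storage-provisioner and default-storageclass are flagged true in that toEnable map. A hedged sketch of the user-facing equivalent for the same profile (assuming a minikube binary on PATH; the test flow itself enables these programmatically):
	  minikube -p flannel-036922 addons list
	  minikube -p flannel-036922 addons enable storage-provisioner
	  minikube -p flannel-036922 addons enable default-storageclass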
	I0414 15:42:54.563242 1907421 addons.go:69] Setting storage-provisioner=true in profile "flannel-036922"
	I0414 15:42:54.563265 1907421 addons.go:238] Setting addon storage-provisioner=true in "flannel-036922"
	I0414 15:42:54.563273 1907421 addons.go:69] Setting default-storageclass=true in profile "flannel-036922"
	I0414 15:42:54.563300 1907421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-036922"
	I0414 15:42:54.563305 1907421 host.go:66] Checking if "flannel-036922" exists ...
	I0414 15:42:54.563335 1907421 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:54.563788 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.563838 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.563865 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.563907 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.566159 1907421 out.go:177] * Verifying Kubernetes components...
	I0414 15:42:54.567701 1907421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:54.582661 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0414 15:42:54.583246 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.583768 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.583805 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.584263 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.584496 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:54.585593 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0414 15:42:54.586151 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.586695 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.586721 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.587156 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.587767 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.587823 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.588816 1907421 addons.go:238] Setting addon default-storageclass=true in "flannel-036922"
	I0414 15:42:54.588862 1907421 host.go:66] Checking if "flannel-036922" exists ...
	I0414 15:42:54.589169 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.589217 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.605944 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
	I0414 15:42:54.605986 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0414 15:42:54.606442 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.606714 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.607143 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.607160 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.607282 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.607308 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.607611 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.607729 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.607824 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:54.608193 1907421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:42:54.608234 1907421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:42:54.610044 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:54.612210 1907421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:42:50.867819 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:50.868522 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find current IP address of domain bridge-036922 in network mk-bridge-036922
	I0414 15:42:50.868555 1908903 main.go:141] libmachine: (bridge-036922) DBG | I0414 15:42:50.868467 1909012 retry.go:31] will retry after 3.520845781s: waiting for domain to come up
	I0414 15:42:54.391586 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.392265 1908903 main.go:141] libmachine: (bridge-036922) found domain IP: 192.168.61.165
	I0414 15:42:54.392301 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has current primary IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.392309 1908903 main.go:141] libmachine: (bridge-036922) reserving static IP address...
	I0414 15:42:54.392694 1908903 main.go:141] libmachine: (bridge-036922) DBG | unable to find host DHCP lease matching {name: "bridge-036922", mac: "52:54:00:d8:e5:52", ip: "192.168.61.165"} in network mk-bridge-036922
	I0414 15:42:54.493139 1908903 main.go:141] libmachine: (bridge-036922) DBG | Getting to WaitForSSH function...
	I0414 15:42:54.493176 1908903 main.go:141] libmachine: (bridge-036922) reserved static IP address 192.168.61.165 for domain bridge-036922
	I0414 15:42:54.493184 1908903 main.go:141] libmachine: (bridge-036922) waiting for SSH...
	I0414 15:42:54.496732 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.497256 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.497289 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.497438 1908903 main.go:141] libmachine: (bridge-036922) DBG | Using SSH client type: external
	I0414 15:42:54.497470 1908903 main.go:141] libmachine: (bridge-036922) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa (-rw-------)
	I0414 15:42:54.497515 1908903 main.go:141] libmachine: (bridge-036922) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0414 15:42:54.497529 1908903 main.go:141] libmachine: (bridge-036922) DBG | About to run SSH command:
	I0414 15:42:54.497542 1908903 main.go:141] libmachine: (bridge-036922) DBG | exit 0
	I0414 15:42:54.628504 1908903 main.go:141] libmachine: (bridge-036922) DBG | SSH cmd err, output: <nil>: 
	I0414 15:42:54.628809 1908903 main.go:141] libmachine: (bridge-036922) KVM machine creation complete
	I0414 15:42:54.629054 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetConfigRaw
	I0414 15:42:54.629681 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:54.630072 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:54.630332 1908903 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0414 15:42:54.630347 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetState
	I0414 15:42:54.632867 1908903 main.go:141] libmachine: Detecting operating system of created instance...
	I0414 15:42:54.632882 1908903 main.go:141] libmachine: Waiting for SSH to be available...
	I0414 15:42:54.632889 1908903 main.go:141] libmachine: Getting to WaitForSSH function...
	I0414 15:42:54.632896 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:54.637477 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.638308 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:54.638311 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.638423 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.638557 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.638771 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.638949 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:54.639184 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:54.639458 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:54.639474 1908903 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0414 15:42:54.750695 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:54.750726 1908903 main.go:141] libmachine: Detecting the provisioner...
	I0414 15:42:54.750740 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:54.754154 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.754756 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.754859 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.755083 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:54.755305 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.755456 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.755636 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:54.755854 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:54.756066 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:54.756078 1908903 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0414 15:42:54.871796 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0414 15:42:54.871901 1908903 main.go:141] libmachine: found compatible host: buildroot
	I0414 15:42:54.871917 1908903 main.go:141] libmachine: Provisioning with buildroot...
	I0414 15:42:54.871935 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:54.872246 1908903 buildroot.go:166] provisioning hostname "bridge-036922"
	I0414 15:42:54.872272 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:54.872483 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:54.875743 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.876125 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:54.876156 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:54.876386 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:54.876633 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.876832 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.876998 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:54.877181 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:54.877502 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:54.877523 1908903 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-036922 && echo "bridge-036922" | sudo tee /etc/hostname
	I0414 15:42:55.000057 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-036922
	
	I0414 15:42:55.000093 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.003879 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.004436 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.004467 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.004819 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.005054 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.005254 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.005507 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.005701 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:55.005995 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:55.006031 1908903 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-036922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-036922/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-036922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 15:42:55.128677 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 15:42:55.128716 1908903 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-1845971/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-1845971/.minikube}
	I0414 15:42:55.128743 1908903 buildroot.go:174] setting up certificates
	I0414 15:42:55.128772 1908903 provision.go:84] configureAuth start
	I0414 15:42:55.128791 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetMachineName
	I0414 15:42:55.129195 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:55.132674 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.133237 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.133295 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.133459 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.137559 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.138052 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.138085 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.138322 1908903 provision.go:143] copyHostCerts
	I0414 15:42:55.138401 1908903 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem, removing ...
	I0414 15:42:55.138427 1908903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem
	I0414 15:42:55.138499 1908903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/key.pem (1679 bytes)
	I0414 15:42:55.138639 1908903 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem, removing ...
	I0414 15:42:55.138652 1908903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem
	I0414 15:42:55.138695 1908903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.pem (1082 bytes)
	I0414 15:42:55.138851 1908903 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem, removing ...
	I0414 15:42:55.138863 1908903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem
	I0414 15:42:55.138888 1908903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-1845971/.minikube/cert.pem (1123 bytes)
	I0414 15:42:55.139002 1908903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem org=jenkins.bridge-036922 san=[127.0.0.1 192.168.61.165 bridge-036922 localhost minikube]
	I0414 15:42:55.169326 1908903 provision.go:177] copyRemoteCerts
	I0414 15:42:55.169402 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 15:42:55.169429 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.172809 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.173239 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.173270 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.173706 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.174030 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.174255 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.174485 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.261123 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 15:42:55.288685 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 15:42:55.316648 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0414 15:42:55.346718 1908903 provision.go:87] duration metric: took 217.897994ms to configureAuth
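	The server cert generated above embeds the SANs listed by provision.go (127.0.0.1, 192.168.61.165, bridge-036922, localhost, minikube). A sketch for inspecting them with openssl, using the path from the copyHostCerts lines:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/20512-1845971/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'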
	I0414 15:42:55.346759 1908903 buildroot.go:189] setting minikube options for container-runtime
	I0414 15:42:55.347050 1908903 config.go:182] Loaded profile config "bridge-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:42:55.347158 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.350409 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.350855 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.350888 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.351139 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.351328 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.351559 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.351722 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.351895 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:55.352172 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:55.352196 1908903 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 15:42:54.613578 1907421 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:42:54.613601 1907421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 15:42:54.613625 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:54.617705 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.618134 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:54.618154 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.618488 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:54.618717 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.618939 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:54.619103 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:54.627890 1907421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
	I0414 15:42:54.628364 1907421 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:42:54.628827 1907421 main.go:141] libmachine: Using API Version  1
	I0414 15:42:54.628849 1907421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:42:54.629832 1907421 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:42:54.630200 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetState
	I0414 15:42:54.632595 1907421 main.go:141] libmachine: (flannel-036922) Calling .DriverName
	I0414 15:42:54.633055 1907421 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 15:42:54.633074 1907421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 15:42:54.633096 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHHostname
	I0414 15:42:54.637402 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.637882 1907421 main.go:141] libmachine: (flannel-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a6:f3", ip: ""} in network mk-flannel-036922: {Iface:virbr4 ExpiryTime:2025-04-14 16:42:24 +0000 UTC Type:0 Mac:52:54:00:47:a6:f3 Iaid: IPaddr:192.168.72.200 Prefix:24 Hostname:flannel-036922 Clientid:01:52:54:00:47:a6:f3}
	I0414 15:42:54.637912 1907421 main.go:141] libmachine: (flannel-036922) DBG | domain flannel-036922 has defined IP address 192.168.72.200 and MAC address 52:54:00:47:a6:f3 in network mk-flannel-036922
	I0414 15:42:54.638627 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHPort
	I0414 15:42:54.638825 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHKeyPath
	I0414 15:42:54.638994 1907421 main.go:141] libmachine: (flannel-036922) Calling .GetSSHUsername
	I0414 15:42:54.639153 1907421 sshutil.go:53] new ssh client: &{IP:192.168.72.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/flannel-036922/id_rsa Username:docker}
	I0414 15:42:54.824401 1907421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:42:54.824485 1907421 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 15:42:54.848222 1907421 node_ready.go:35] waiting up to 15m0s for node "flannel-036922" to be "Ready" ...
	I0414 15:42:55.016349 1907421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 15:42:55.024812 1907421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:42:55.334347 1907421 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
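	The sed pipeline at 15:42:54.824485 splices a hosts block into the CoreDNS Corefile before replacing the ConfigMap; reconstructed from that command (not separately captured), the injected fragment is:
	  hosts {
	     192.168.72.1 host.minikube.internal
	     fallthrough
	  }
	It can be read back with the same `kubectl -n kube-system get configmap coredns -o yaml` call shown earlier in this run.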
	I0414 15:42:55.469300 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.469338 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.469832 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.469875 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.469885 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.469894 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.469915 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.470211 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.470226 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.470243 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.494538 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.494593 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.494941 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.494960 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.494989 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.843405 1907421 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-036922" context rescaled to 1 replicas
	I0414 15:42:55.852113 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.852145 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.852433 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.852455 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.852467 1907421 main.go:141] libmachine: Making call to close driver server
	I0414 15:42:55.852475 1907421 main.go:141] libmachine: (flannel-036922) Calling .Close
	I0414 15:42:55.852855 1907421 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:42:55.852876 1907421 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:42:55.852900 1907421 main.go:141] libmachine: (flannel-036922) DBG | Closing plugin on server side
	I0414 15:42:55.855070 1907421 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 15:42:55.609672 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 15:42:55.609708 1908903 main.go:141] libmachine: Checking connection to Docker...
	I0414 15:42:55.609720 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetURL
	I0414 15:42:55.611018 1908903 main.go:141] libmachine: (bridge-036922) DBG | using libvirt version 6000000
	I0414 15:42:55.613407 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.613780 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.613807 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.614012 1908903 main.go:141] libmachine: Docker is up and running!
	I0414 15:42:55.614034 1908903 main.go:141] libmachine: Reticulating splines...
	I0414 15:42:55.614045 1908903 client.go:171] duration metric: took 22.962392414s to LocalClient.Create
	I0414 15:42:55.614118 1908903 start.go:167] duration metric: took 22.96254203s to libmachine.API.Create "bridge-036922"
	I0414 15:42:55.614140 1908903 start.go:293] postStartSetup for "bridge-036922" (driver="kvm2")
	I0414 15:42:55.614154 1908903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 15:42:55.614196 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.614557 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 15:42:55.614591 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.617351 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.617730 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.617783 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.617881 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.618095 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.618279 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.618457 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.706758 1908903 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 15:42:55.711737 1908903 info.go:137] Remote host: Buildroot 2023.02.9
	I0414 15:42:55.711775 1908903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/addons for local assets ...
	I0414 15:42:55.711864 1908903 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-1845971/.minikube/files for local assets ...
	I0414 15:42:55.711967 1908903 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem -> 18532702.pem in /etc/ssl/certs
	I0414 15:42:55.712104 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0414 15:42:55.724874 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:42:55.754120 1908903 start.go:296] duration metric: took 139.933679ms for postStartSetup
	I0414 15:42:55.754193 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetConfigRaw
	I0414 15:42:55.754932 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:55.757984 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.758267 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.758297 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.758631 1908903 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/config.json ...
	I0414 15:42:55.758849 1908903 start.go:128] duration metric: took 23.13086309s to createHost
	I0414 15:42:55.758880 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.761734 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.762225 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.762256 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.762495 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.762688 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.762944 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.763100 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.763340 1908903 main.go:141] libmachine: Using SSH client type: native
	I0414 15:42:55.763660 1908903 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 192.168.61.165 22 <nil> <nil>}
	I0414 15:42:55.763680 1908903 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0414 15:42:55.871836 1908903 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744645375.840729325
	
	I0414 15:42:55.871865 1908903 fix.go:216] guest clock: 1744645375.840729325
	I0414 15:42:55.871875 1908903 fix.go:229] Guest: 2025-04-14 15:42:55.840729325 +0000 UTC Remote: 2025-04-14 15:42:55.758864102 +0000 UTC m=+35.409485075 (delta=81.865223ms)
	I0414 15:42:55.871904 1908903 fix.go:200] guest clock delta is within tolerance: 81.865223ms
	I0414 15:42:55.871910 1908903 start.go:83] releasing machines lock for "bridge-036922", held for 23.244108969s
	I0414 15:42:55.871935 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.872246 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:55.875616 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.876069 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.876099 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.876330 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.876969 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.877174 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:42:55.877292 1908903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 15:42:55.877339 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.877479 1908903 ssh_runner.go:195] Run: cat /version.json
	I0414 15:42:55.877515 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:42:55.880495 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.880821 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.880916 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.880943 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.881164 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.881301 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:55.881322 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:55.881353 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.881480 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.881545 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:42:55.881643 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.881712 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:42:55.881911 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:42:55.882048 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:42:55.986199 1908903 ssh_runner.go:195] Run: systemctl --version
	I0414 15:42:55.993392 1908903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 15:42:56.164978 1908903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0414 15:42:56.172178 1908903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0414 15:42:56.172282 1908903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 15:42:56.197933 1908903 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0414 15:42:56.197965 1908903 start.go:495] detecting cgroup driver to use...
	I0414 15:42:56.198045 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 15:42:56.220424 1908903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 15:42:56.238850 1908903 docker.go:217] disabling cri-docker service (if available) ...
	I0414 15:42:56.238925 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 15:42:56.258562 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 15:42:56.281276 1908903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 15:42:56.446192 1908903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 15:42:56.624912 1908903 docker.go:233] disabling docker service ...
	I0414 15:42:56.624983 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 15:42:56.646632 1908903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 15:42:56.661759 1908903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 15:42:56.821178 1908903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 15:42:56.960834 1908903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 15:42:56.976444 1908903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 15:42:57.000020 1908903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 15:42:57.000107 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.012798 1908903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 15:42:57.012878 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.024940 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.037307 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.049273 1908903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 15:42:57.061679 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.073870 1908903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 15:42:57.092514 1908903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
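	The sequence of sed edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) all target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. A quick way to confirm their effect on a comparable VM is to dump CRI-O's merged configuration and filter those keys (a sketch, run inside the guest, e.g. via 'minikube ssh'):
	    # show the effective CRI-O configuration and the keys the sed edits touch
	    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'
	    # or inspect the drop-in file directly
	    sudo cat /etc/crio/crio.conf.d/02-crio.conf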
	I0414 15:42:57.104956 1908903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 15:42:57.115727 1908903 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 15:42:57.115813 1908903 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 15:42:57.133078 1908903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
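	The failed sysctl probe above simply means the br_netfilter module was not loaded yet, which is why minikube loads it and then enables IPv4 forwarding. The same checks can be repeated by hand when debugging a guest where bridged pod traffic is not flowing (a sketch):
	    sudo modprobe br_netfilter                           # load the bridge netfilter module
	    sysctl net.bridge.bridge-nf-call-iptables            # kubeadm preflight expects this to be 1
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'  # enable IPv4 forwarding for pod traffic
	    cat /proc/sys/net/ipv4/ip_forward                    # verify the value stuck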
	I0414 15:42:57.144441 1908903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:42:57.281237 1908903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 15:42:57.385608 1908903 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 15:42:57.385708 1908903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 15:42:57.391600 1908903 start.go:563] Will wait 60s for crictl version
	I0414 15:42:57.391684 1908903 ssh_runner.go:195] Run: which crictl
	I0414 15:42:57.396066 1908903 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 15:42:57.436559 1908903 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0414 15:42:57.436662 1908903 ssh_runner.go:195] Run: crio --version
	I0414 15:42:57.466242 1908903 ssh_runner.go:195] Run: crio --version
	I0414 15:42:57.506266 1908903 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
	I0414 15:42:55.856560 1907421 addons.go:514] duration metric: took 1.293454428s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 15:42:56.852426 1907421 node_ready.go:53] node "flannel-036922" has status "Ready":"False"
	I0414 15:42:59.215921 1898413 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0414 15:42:59.216197 1898413 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0414 15:42:59.216228 1898413 kubeadm.go:310] 
	I0414 15:42:59.216283 1898413 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0414 15:42:59.216336 1898413 kubeadm.go:310] 		timed out waiting for the condition
	I0414 15:42:59.216342 1898413 kubeadm.go:310] 
	I0414 15:42:59.216389 1898413 kubeadm.go:310] 	This error is likely caused by:
	I0414 15:42:59.216433 1898413 kubeadm.go:310] 		- The kubelet is not running
	I0414 15:42:59.216581 1898413 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0414 15:42:59.216592 1898413 kubeadm.go:310] 
	I0414 15:42:59.216725 1898413 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0414 15:42:59.216770 1898413 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0414 15:42:59.216818 1898413 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0414 15:42:59.216822 1898413 kubeadm.go:310] 
	I0414 15:42:59.216907 1898413 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0414 15:42:59.217006 1898413 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0414 15:42:59.217015 1898413 kubeadm.go:310] 
	I0414 15:42:59.217187 1898413 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0414 15:42:59.217303 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0414 15:42:59.217409 1898413 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0414 15:42:59.217503 1898413 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0414 15:42:59.217511 1898413 kubeadm.go:310] 
	I0414 15:42:59.219259 1898413 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 15:42:59.219407 1898413 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0414 15:42:59.219514 1898413 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0414 15:42:59.220159 1898413 kubeadm.go:394] duration metric: took 8m0.569569368s to StartCluster
	I0414 15:42:59.220230 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 15:42:59.220304 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 15:42:59.296348 1898413 cri.go:89] found id: ""
	I0414 15:42:59.296381 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.296393 1898413 logs.go:284] No container was found matching "kube-apiserver"
	I0414 15:42:59.296403 1898413 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 15:42:59.296511 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 15:42:59.357668 1898413 cri.go:89] found id: ""
	I0414 15:42:59.357701 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.357713 1898413 logs.go:284] No container was found matching "etcd"
	I0414 15:42:59.357720 1898413 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 15:42:59.357797 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 15:42:59.408582 1898413 cri.go:89] found id: ""
	I0414 15:42:59.408613 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.408621 1898413 logs.go:284] No container was found matching "coredns"
	I0414 15:42:59.408627 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 15:42:59.408702 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 15:42:59.457402 1898413 cri.go:89] found id: ""
	I0414 15:42:59.457438 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.457449 1898413 logs.go:284] No container was found matching "kube-scheduler"
	I0414 15:42:59.457457 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 15:42:59.457530 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 15:42:59.508543 1898413 cri.go:89] found id: ""
	I0414 15:42:59.508601 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.508613 1898413 logs.go:284] No container was found matching "kube-proxy"
	I0414 15:42:59.508621 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 15:42:59.508691 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 15:42:59.557213 1898413 cri.go:89] found id: ""
	I0414 15:42:59.557250 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.557262 1898413 logs.go:284] No container was found matching "kube-controller-manager"
	I0414 15:42:59.557270 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 15:42:59.557343 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 15:42:59.607994 1898413 cri.go:89] found id: ""
	I0414 15:42:59.608023 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.608048 1898413 logs.go:284] No container was found matching "kindnet"
	I0414 15:42:59.608057 1898413 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0414 15:42:59.608129 1898413 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0414 15:42:59.657459 1898413 cri.go:89] found id: ""
	I0414 15:42:59.657494 1898413 logs.go:282] 0 containers: []
	W0414 15:42:59.657507 1898413 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0414 15:42:59.657525 1898413 logs.go:123] Gathering logs for kubelet ...
	I0414 15:42:59.657549 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 15:42:59.723160 1898413 logs.go:123] Gathering logs for dmesg ...
	I0414 15:42:59.723223 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 15:42:59.743367 1898413 logs.go:123] Gathering logs for describe nodes ...
	I0414 15:42:59.743418 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0414 15:42:59.876644 1898413 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0414 15:42:59.876695 1898413 logs.go:123] Gathering logs for CRI-O ...
	I0414 15:42:59.876713 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 15:43:00.032948 1898413 logs.go:123] Gathering logs for container status ...
	I0414 15:43:00.032994 1898413 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0414 15:43:00.086613 1898413 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0414 15:43:00.086686 1898413 out.go:270] * 
	W0414 15:43:00.086782 1898413 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:43:00.086809 1898413 out.go:270] * 
	W0414 15:43:00.087917 1898413 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0414 15:43:00.091413 1898413 out.go:201] 
	W0414 15:43:00.092767 1898413 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0414 15:43:00.092825 1898413 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0414 15:43:00.092861 1898413 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0414 15:43:00.094446 1898413 out.go:201] 
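	Acting on the suggestion above would mean restarting this profile with the kubelet pinned to the systemd cgroup driver. An illustrative invocation (the profile name is a placeholder; driver, runtime and Kubernetes version are taken from this run):
	    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.20.0 \
	      --extra-config=kubelet.cgroup-driver=systemd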
	I0414 15:42:57.507650 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetIP
	I0414 15:42:57.510669 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:57.511148 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:42:57.511176 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:42:57.511409 1908903 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0414 15:42:57.516092 1908903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
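	The one-liner above is minikube's idempotent way of pinning host.minikube.internal in the guest's /etc/hosts: strip any stale mapping, append the current one, and copy the result back as root. Unrolled for readability (same steps, gateway IP taken from this run):
	    IP=192.168.61.1
	    { grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any previous entry
	      printf '%s\thost.minikube.internal\n' "$IP"        # append the current mapping
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts                         # install the rewritten file as root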
	I0414 15:42:57.529590 1908903 kubeadm.go:883] updating cluster {Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.165 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 15:42:57.529766 1908903 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 15:42:57.529845 1908903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:42:57.572139 1908903 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0414 15:42:57.572227 1908903 ssh_runner.go:195] Run: which lz4
	I0414 15:42:57.576627 1908903 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0414 15:42:57.581291 1908903 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0414 15:42:57.581343 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
	I0414 15:42:59.289654 1908903 crio.go:462] duration metric: took 1.713065895s to copy over tarball
	I0414 15:42:59.289872 1908903 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0414 15:42:59.351809 1907421 node_ready.go:53] node "flannel-036922" has status "Ready":"False"
	I0414 15:43:00.852154 1907421 node_ready.go:49] node "flannel-036922" has status "Ready":"True"
	I0414 15:43:00.852190 1907421 node_ready.go:38] duration metric: took 6.003920766s for node "flannel-036922" to be "Ready" ...
	I0414 15:43:00.852202 1907421 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:43:00.855688 1907421 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:03.053356 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:02.349077 1908903 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.059144774s)
	I0414 15:43:02.349125 1908903 crio.go:469] duration metric: took 3.05935727s to extract the tarball
	I0414 15:43:02.349133 1908903 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0414 15:43:02.391460 1908903 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 15:43:02.441459 1908903 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 15:43:02.441495 1908903 cache_images.go:84] Images are preloaded, skipping loading
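	The preload flow above (check the image store, copy the ~399 MB tarball, extract it under /var, remove it, re-check) is what lets the cluster start without pulling control-plane images from the network. It can be verified after the fact from inside the guest (a sketch):
	    sudo crictl images | grep registry.k8s.io/kube-apiserver      # preloaded control-plane image is now present
	    ls /preloaded.tar.lz4 2>/dev/null || echo "preload tarball already cleaned up"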
	I0414 15:43:02.441507 1908903 kubeadm.go:934] updating node { 192.168.61.165 8443 v1.32.2 crio true true} ...
	I0414 15:43:02.441660 1908903 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-036922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
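	The unit fragment above is what minikube writes into the kubelet drop-in (the 313-byte 10-kubeadm.conf copied a few lines below): ExecStart is cleared and redefined so the kubelet runs the cached v1.32.2 binary with this node's hostname and IP. The merged unit can be inspected on a similar VM with (a sketch):
	    systemctl cat kubelet                                         # base unit plus the 10-kubeadm.conf drop-in
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf     # the fragment shown in the log
	    systemctl status kubelet --no-pager                           # confirm it restarted with the override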
	I0414 15:43:02.441763 1908903 ssh_runner.go:195] Run: crio config
	I0414 15:43:02.502883 1908903 cni.go:84] Creating CNI manager for "bridge"
	I0414 15:43:02.502919 1908903 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 15:43:02.502962 1908903 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.165 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-036922 NodeName:bridge-036922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 15:43:02.503126 1908903 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-036922"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.165"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.165"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
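	This rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later promoted to /var/tmp/minikube/kubeadm.yaml before 'kubeadm init' runs. On kubeadm releases that ship 'kubeadm config validate' (v1.26 and newer, so including the v1.32.2 binary used here), the file could be sanity-checked ahead of init (a sketch):
	    sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml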
	
	I0414 15:43:02.503207 1908903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 15:43:02.516388 1908903 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 15:43:02.516457 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 15:43:02.527106 1908903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0414 15:43:02.545740 1908903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 15:43:02.564255 1908903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0414 15:43:02.582628 1908903 ssh_runner.go:195] Run: grep 192.168.61.165	control-plane.minikube.internal$ /etc/hosts
	I0414 15:43:02.587032 1908903 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 15:43:02.601453 1908903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:43:02.733859 1908903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:43:02.752631 1908903 certs.go:68] Setting up /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922 for IP: 192.168.61.165
	I0414 15:43:02.752661 1908903 certs.go:194] generating shared ca certs ...
	I0414 15:43:02.752689 1908903 certs.go:226] acquiring lock for ca certs: {Name:mk01199c86d4c9dbb6d756d9ad313fb9f19edafc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:02.752885 1908903 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key
	I0414 15:43:02.752950 1908903 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key
	I0414 15:43:02.752967 1908903 certs.go:256] generating profile certs ...
	I0414 15:43:02.753043 1908903 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.key
	I0414 15:43:02.753060 1908903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt with IP's: []
	I0414 15:43:03.058289 1908903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt ...
	I0414 15:43:03.058339 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.crt: {Name:mk7351040ba2e8c3a4ca5b96eb26d95a2d5977ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.058574 1908903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.key ...
	I0414 15:43:03.058591 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/client.key: {Name:mkd34c01b2eee2dc3fc1717df5b3dc46ce680363 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.058702 1908903 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key.df689893
	I0414 15:43:03.058718 1908903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt.df689893 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.165]
	I0414 15:43:03.689440 1908903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt.df689893 ...
	I0414 15:43:03.689480 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt.df689893: {Name:mkd5b14756191631834da95f41b38a940cf31349 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.689692 1908903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key.df689893 ...
	I0414 15:43:03.689717 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key.df689893: {Name:mkf6fff86e315dd01269aced9364162e3eff934a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.689822 1908903 certs.go:381] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt.df689893 -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt
	I0414 15:43:03.689918 1908903 certs.go:385] copying /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key.df689893 -> /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key
	I0414 15:43:03.689995 1908903 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.key
	I0414 15:43:03.690014 1908903 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.crt with IP's: []
	I0414 15:43:03.794322 1908903 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.crt ...
	I0414 15:43:03.794351 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.crt: {Name:mk8f147274fd78d695cbf09159a830835e63cf56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.794521 1908903 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.key ...
	I0414 15:43:03.794536 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.key: {Name:mk730e2e16f2bcbe4155bbe3689536f15e6442c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:03.794712 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem (1338 bytes)
	W0414 15:43:03.794750 1908903 certs.go:480] ignoring /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270_empty.pem, impossibly tiny 0 bytes
	I0414 15:43:03.794776 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca-key.pem (1675 bytes)
	I0414 15:43:03.794812 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/ca.pem (1082 bytes)
	I0414 15:43:03.794838 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/cert.pem (1123 bytes)
	I0414 15:43:03.794859 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/key.pem (1679 bytes)
	I0414 15:43:03.794898 1908903 certs.go:484] found cert: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem (1708 bytes)
	I0414 15:43:03.795467 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 15:43:03.825320 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 15:43:03.854532 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 15:43:03.887606 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 15:43:03.921213 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 15:43:03.950111 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 15:43:03.981773 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 15:43:04.011109 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/bridge-036922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0414 15:43:04.039409 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/ssl/certs/18532702.pem --> /usr/share/ca-certificates/18532702.pem (1708 bytes)
	I0414 15:43:04.064525 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 15:43:04.092407 1908903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-1845971/.minikube/certs/1853270.pem --> /usr/share/ca-certificates/1853270.pem (1338 bytes)
	I0414 15:43:04.118044 1908903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 15:43:04.139488 1908903 ssh_runner.go:195] Run: openssl version
	I0414 15:43:04.146985 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 15:43:04.159472 1908903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:43:04.164659 1908903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 14:17 /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:43:04.164739 1908903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 15:43:04.171807 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0414 15:43:04.188160 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1853270.pem && ln -fs /usr/share/ca-certificates/1853270.pem /etc/ssl/certs/1853270.pem"
	I0414 15:43:04.205038 1908903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1853270.pem
	I0414 15:43:04.211611 1908903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 14:25 /usr/share/ca-certificates/1853270.pem
	I0414 15:43:04.211681 1908903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1853270.pem
	I0414 15:43:04.218747 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1853270.pem /etc/ssl/certs/51391683.0"
	I0414 15:43:04.236820 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18532702.pem && ln -fs /usr/share/ca-certificates/18532702.pem /etc/ssl/certs/18532702.pem"
	I0414 15:43:04.263022 1908903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18532702.pem
	I0414 15:43:04.273910 1908903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 14:25 /usr/share/ca-certificates/18532702.pem
	I0414 15:43:04.273985 1908903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18532702.pem
	I0414 15:43:04.287756 1908903 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18532702.pem /etc/ssl/certs/3ec20f2e.0"
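	The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certificates, which is how tools locate CAs under /etc/ssl/certs. The same link can be reconstructed by hand (a sketch, using the minikube CA as the example):
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints b5213941 for this CA
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # the .0 suffix disambiguates hash collisions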
	I0414 15:43:04.306774 1908903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 15:43:04.311741 1908903 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 15:43:04.311807 1908903 kubeadm.go:392] StartCluster: {Name:bridge-036922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:bridge-036922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.165 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 15:43:04.311903 1908903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 15:43:04.311973 1908903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 15:43:04.357659 1908903 cri.go:89] found id: ""
	I0414 15:43:04.357758 1908903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 15:43:04.372556 1908903 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 15:43:04.384120 1908903 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 15:43:04.396721 1908903 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 15:43:04.396749 1908903 kubeadm.go:157] found existing configuration files:
	
	I0414 15:43:04.396811 1908903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 15:43:04.407452 1908903 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 15:43:04.407542 1908903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 15:43:04.418929 1908903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 15:43:04.430627 1908903 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 15:43:04.430717 1908903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 15:43:04.442139 1908903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 15:43:04.453146 1908903 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 15:43:04.453222 1908903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 15:43:04.464848 1908903 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 15:43:04.479067 1908903 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 15:43:04.479145 1908903 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 15:43:04.491976 1908903 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0414 15:43:04.553900 1908903 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 15:43:04.554017 1908903 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 15:43:04.672215 1908903 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 15:43:04.672355 1908903 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 15:43:04.672497 1908903 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 15:43:04.688408 1908903 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 15:43:04.769682 1908903 out.go:235]   - Generating certificates and keys ...
	I0414 15:43:04.769796 1908903 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 15:43:04.769875 1908903 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 15:43:04.820739 1908903 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 15:43:05.150004 1908903 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 15:43:05.206561 1908903 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 15:43:05.428733 1908903 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 15:43:05.776550 1908903 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 15:43:05.776721 1908903 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-036922 localhost] and IPs [192.168.61.165 127.0.0.1 ::1]
	I0414 15:43:06.204857 1908903 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 15:43:06.205015 1908903 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-036922 localhost] and IPs [192.168.61.165 127.0.0.1 ::1]
	I0414 15:43:06.375999 1908903 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 15:43:06.499159 1908903 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 15:43:06.580941 1908903 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 15:43:06.581186 1908903 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 15:43:06.679071 1908903 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 15:43:06.835883 1908903 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 15:43:06.969239 1908903 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 15:43:07.047193 1908903 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 15:43:07.515283 1908903 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 15:43:07.517979 1908903 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 15:43:07.520948 1908903 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 15:43:05.362702 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:07.363565 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:07.522722 1908903 out.go:235]   - Booting up control plane ...
	I0414 15:43:07.522853 1908903 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 15:43:07.522975 1908903 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 15:43:07.523079 1908903 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 15:43:07.540604 1908903 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 15:43:07.548217 1908903 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 15:43:07.548314 1908903 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 15:43:07.720338 1908903 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 15:43:07.720487 1908903 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 15:43:08.221640 1908903 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.690509ms
	I0414 15:43:08.221744 1908903 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 15:43:09.363642 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:11.862669 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:13.722997 1908903 kubeadm.go:310] [api-check] The API server is healthy after 5.502696369s
	I0414 15:43:13.742223 1908903 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 15:43:13.757863 1908903 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 15:43:13.795334 1908903 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 15:43:13.795643 1908903 kubeadm.go:310] [mark-control-plane] Marking the node bridge-036922 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 15:43:13.812020 1908903 kubeadm.go:310] [bootstrap-token] Using token: c2gb67.laeaummb5gd4egy3
	I0414 15:43:13.813488 1908903 out.go:235]   - Configuring RBAC rules ...
	I0414 15:43:13.813629 1908903 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 15:43:13.828250 1908903 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 15:43:13.852437 1908903 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 15:43:13.858931 1908903 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 15:43:13.863903 1908903 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 15:43:13.871061 1908903 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 15:43:14.132072 1908903 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 15:43:14.585461 1908903 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 15:43:15.130437 1908903 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 15:43:15.131975 1908903 kubeadm.go:310] 
	I0414 15:43:15.132090 1908903 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 15:43:15.132109 1908903 kubeadm.go:310] 
	I0414 15:43:15.132246 1908903 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 15:43:15.132265 1908903 kubeadm.go:310] 
	I0414 15:43:15.132300 1908903 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 15:43:15.132378 1908903 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 15:43:15.132458 1908903 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 15:43:15.132468 1908903 kubeadm.go:310] 
	I0414 15:43:15.132540 1908903 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 15:43:15.132550 1908903 kubeadm.go:310] 
	I0414 15:43:15.132636 1908903 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 15:43:15.132650 1908903 kubeadm.go:310] 
	I0414 15:43:15.132726 1908903 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 15:43:15.132825 1908903 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 15:43:15.132918 1908903 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 15:43:15.132927 1908903 kubeadm.go:310] 
	I0414 15:43:15.133043 1908903 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 15:43:15.133158 1908903 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 15:43:15.133178 1908903 kubeadm.go:310] 
	I0414 15:43:15.133292 1908903 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c2gb67.laeaummb5gd4egy3 \
	I0414 15:43:15.133428 1908903 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f \
	I0414 15:43:15.133451 1908903 kubeadm.go:310] 	--control-plane 
	I0414 15:43:15.133455 1908903 kubeadm.go:310] 
	I0414 15:43:15.133587 1908903 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 15:43:15.133599 1908903 kubeadm.go:310] 
	I0414 15:43:15.133694 1908903 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c2gb67.laeaummb5gd4egy3 \
	I0414 15:43:15.133862 1908903 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4153ea3e283aa4b6e0f8a22db86b20faadf1e5e1b2541eaf9963ff3308e22b8f 
	I0414 15:43:15.134716 1908903 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
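For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's public key. A minimal sketch of how that hash is typically recomputed on the control-plane node, assuming the default kubeadm certificate path /etc/kubernetes/pki/ca.crt:

  # Recompute the CA public-key hash used by --discovery-token-ca-cert-hash
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the sha256:... value shown above; bootstrap tokens themselves expire (24h by default), so a fresh join command can be generated later with `kubeadm token create --print-join-command`.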
	I0414 15:43:15.134833 1908903 cni.go:84] Creating CNI manager for "bridge"
	I0414 15:43:15.137761 1908903 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0414 15:43:15.139072 1908903 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0414 15:43:15.150727 1908903 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0414 15:43:15.172753 1908903 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 15:43:15.172843 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:15.172878 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-036922 minikube.k8s.io/updated_at=2025_04_14T15_43_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=ed8f1f01b35eff2786f40199152a1775806f2de2 minikube.k8s.io/name=bridge-036922 minikube.k8s.io/primary=true
	I0414 15:43:15.337474 1908903 ops.go:34] apiserver oom_adj: -16
	I0414 15:43:15.337603 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:13.862759 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:15.864154 1907421 pod_ready.go:103] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:17.367663 1907421 pod_ready.go:93] pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.367697 1907421 pod_ready.go:82] duration metric: took 16.511977863s for pod "coredns-668d6bf9bc-8lknp" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.367714 1907421 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.379462 1907421 pod_ready.go:93] pod "etcd-flannel-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.379493 1907421 pod_ready.go:82] duration metric: took 11.770579ms for pod "etcd-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.379508 1907421 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.386865 1907421 pod_ready.go:93] pod "kube-apiserver-flannel-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.386898 1907421 pod_ready.go:82] duration metric: took 7.382173ms for pod "kube-apiserver-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.386913 1907421 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.392211 1907421 pod_ready.go:93] pod "kube-controller-manager-flannel-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.392234 1907421 pod_ready.go:82] duration metric: took 5.31374ms for pod "kube-controller-manager-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.392243 1907421 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-7zd42" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.397287 1907421 pod_ready.go:93] pod "kube-proxy-7zd42" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.397312 1907421 pod_ready.go:82] duration metric: took 5.062669ms for pod "kube-proxy-7zd42" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.397322 1907421 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.759889 1907421 pod_ready.go:93] pod "kube-scheduler-flannel-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:17.759918 1907421 pod_ready.go:82] duration metric: took 362.587262ms for pod "kube-scheduler-flannel-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:17.759930 1907421 pod_ready.go:39] duration metric: took 16.907709508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:43:17.759949 1907421 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:43:17.760002 1907421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:43:17.779080 1907421 api_server.go:72] duration metric: took 23.215997595s to wait for apiserver process to appear ...
	I0414 15:43:17.779114 1907421 api_server.go:88] waiting for apiserver healthz status ...
	I0414 15:43:17.779133 1907421 api_server.go:253] Checking apiserver healthz at https://192.168.72.200:8443/healthz ...
	I0414 15:43:17.786736 1907421 api_server.go:279] https://192.168.72.200:8443/healthz returned 200:
	ok
	I0414 15:43:17.787921 1907421 api_server.go:141] control plane version: v1.32.2
	I0414 15:43:17.787946 1907421 api_server.go:131] duration metric: took 8.826568ms to wait for apiserver health ...
	I0414 15:43:17.787956 1907421 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 15:43:17.961903 1907421 system_pods.go:59] 7 kube-system pods found
	I0414 15:43:17.961950 1907421 system_pods.go:61] "coredns-668d6bf9bc-8lknp" [06667bb5-e553-4c4f-abf5-d8c01729ea1d] Running
	I0414 15:43:17.961956 1907421 system_pods.go:61] "etcd-flannel-036922" [c2a29905-84cb-4e69-8a15-0525ae990e24] Running
	I0414 15:43:17.961959 1907421 system_pods.go:61] "kube-apiserver-flannel-036922" [d9336840-d608-4c31-bf23-e479553bf106] Running
	I0414 15:43:17.961964 1907421 system_pods.go:61] "kube-controller-manager-flannel-036922" [31280388-ed00-4b11-bc68-0cafdecc33e6] Running
	I0414 15:43:17.961971 1907421 system_pods.go:61] "kube-proxy-7zd42" [671465e4-9ea3-4a36-8cc1-5a7c303837b2] Running
	I0414 15:43:17.961975 1907421 system_pods.go:61] "kube-scheduler-flannel-036922" [cc061af0-8dee-4822-8f03-17ab374c2c08] Running
	I0414 15:43:17.961979 1907421 system_pods.go:61] "storage-provisioner" [d5b335f4-e0d4-48bb-9aa8-9ee2a9619b48] Running
	I0414 15:43:17.961987 1907421 system_pods.go:74] duration metric: took 174.024277ms to wait for pod list to return data ...
	I0414 15:43:17.962002 1907421 default_sa.go:34] waiting for default service account to be created ...
	I0414 15:43:15.837661 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:16.338010 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:16.838499 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:17.338708 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:17.838651 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:18.338085 1908903 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 15:43:18.435813 1908903 kubeadm.go:1113] duration metric: took 3.263041868s to wait for elevateKubeSystemPrivileges
	I0414 15:43:18.435864 1908903 kubeadm.go:394] duration metric: took 14.12406212s to StartCluster
	I0414 15:43:18.435891 1908903 settings.go:142] acquiring lock: {Name:mkf8fdccd744793c9a876a07da6b33fabe880d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:18.435976 1908903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:43:18.437104 1908903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20512-1845971/kubeconfig: {Name:mk700cb2cf46a87df11c1873f52c26c76c14915e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 15:43:18.437365 1908903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 15:43:18.437363 1908903 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.165 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 15:43:18.437461 1908903 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0414 15:43:18.437538 1908903 addons.go:69] Setting storage-provisioner=true in profile "bridge-036922"
	I0414 15:43:18.437559 1908903 addons.go:238] Setting addon storage-provisioner=true in "bridge-036922"
	I0414 15:43:18.437605 1908903 host.go:66] Checking if "bridge-036922" exists ...
	I0414 15:43:18.437618 1908903 config.go:182] Loaded profile config "bridge-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:43:18.437553 1908903 addons.go:69] Setting default-storageclass=true in profile "bridge-036922"
	I0414 15:43:18.437690 1908903 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-036922"
	I0414 15:43:18.438070 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.438102 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.438141 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.438106 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.439155 1908903 out.go:177] * Verifying Kubernetes components...
	I0414 15:43:18.440542 1908903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 15:43:18.455641 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0414 15:43:18.456194 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.456697 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.456719 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.457142 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.457599 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.457622 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.460636 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0414 15:43:18.461223 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.461758 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.461782 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.462163 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.462386 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetState
	I0414 15:43:18.466166 1908903 addons.go:238] Setting addon default-storageclass=true in "bridge-036922"
	I0414 15:43:18.466211 1908903 host.go:66] Checking if "bridge-036922" exists ...
	I0414 15:43:18.466628 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.466677 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.475648 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46661
	I0414 15:43:18.476519 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.477273 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.477298 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.477770 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.477988 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetState
	I0414 15:43:18.480218 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:43:18.482224 1908903 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 15:43:18.162281 1907421 default_sa.go:45] found service account: "default"
	I0414 15:43:18.162314 1907421 default_sa.go:55] duration metric: took 200.300594ms for default service account to be created ...
	I0414 15:43:18.162327 1907421 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 15:43:18.361676 1907421 system_pods.go:86] 7 kube-system pods found
	I0414 15:43:18.361730 1907421 system_pods.go:89] "coredns-668d6bf9bc-8lknp" [06667bb5-e553-4c4f-abf5-d8c01729ea1d] Running
	I0414 15:43:18.361739 1907421 system_pods.go:89] "etcd-flannel-036922" [c2a29905-84cb-4e69-8a15-0525ae990e24] Running
	I0414 15:43:18.361745 1907421 system_pods.go:89] "kube-apiserver-flannel-036922" [d9336840-d608-4c31-bf23-e479553bf106] Running
	I0414 15:43:18.361757 1907421 system_pods.go:89] "kube-controller-manager-flannel-036922" [31280388-ed00-4b11-bc68-0cafdecc33e6] Running
	I0414 15:43:18.361762 1907421 system_pods.go:89] "kube-proxy-7zd42" [671465e4-9ea3-4a36-8cc1-5a7c303837b2] Running
	I0414 15:43:18.361767 1907421 system_pods.go:89] "kube-scheduler-flannel-036922" [cc061af0-8dee-4822-8f03-17ab374c2c08] Running
	I0414 15:43:18.361780 1907421 system_pods.go:89] "storage-provisioner" [d5b335f4-e0d4-48bb-9aa8-9ee2a9619b48] Running
	I0414 15:43:18.361790 1907421 system_pods.go:126] duration metric: took 199.454049ms to wait for k8s-apps to be running ...
	I0414 15:43:18.361798 1907421 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 15:43:18.361862 1907421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:43:18.378922 1907421 system_svc.go:56] duration metric: took 17.110809ms WaitForService to wait for kubelet
	I0414 15:43:18.378962 1907421 kubeadm.go:582] duration metric: took 23.815883488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:43:18.378990 1907421 node_conditions.go:102] verifying NodePressure condition ...
	I0414 15:43:18.561117 1907421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 15:43:18.561152 1907421 node_conditions.go:123] node cpu capacity is 2
	I0414 15:43:18.561172 1907421 node_conditions.go:105] duration metric: took 182.174643ms to run NodePressure ...
	I0414 15:43:18.561187 1907421 start.go:241] waiting for startup goroutines ...
	I0414 15:43:18.561195 1907421 start.go:246] waiting for cluster config update ...
	I0414 15:43:18.561210 1907421 start.go:255] writing updated cluster config ...
	I0414 15:43:18.561585 1907421 ssh_runner.go:195] Run: rm -f paused
	I0414 15:43:18.616899 1907421 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 15:43:18.619008 1907421 out.go:177] * Done! kubectl is now configured to use "flannel-036922" cluster and "default" namespace by default
	I0414 15:43:18.483663 1908903 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:43:18.483687 1908903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 15:43:18.483716 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:43:18.487404 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:43:18.487941 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:43:18.487975 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:43:18.488288 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:43:18.488518 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:43:18.488630 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37421
	I0414 15:43:18.488862 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:43:18.489039 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:43:18.489197 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.489670 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.489697 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.490323 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.490968 1908903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:43:18.491008 1908903 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:43:18.507682 1908903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0414 15:43:18.508081 1908903 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:43:18.508551 1908903 main.go:141] libmachine: Using API Version  1
	I0414 15:43:18.508583 1908903 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:43:18.509071 1908903 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:43:18.509269 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetState
	I0414 15:43:18.511125 1908903 main.go:141] libmachine: (bridge-036922) Calling .DriverName
	I0414 15:43:18.511478 1908903 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 15:43:18.511517 1908903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 15:43:18.511540 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHHostname
	I0414 15:43:18.514530 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:43:18.515112 1908903 main.go:141] libmachine: (bridge-036922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:e5:52", ip: ""} in network mk-bridge-036922: {Iface:virbr3 ExpiryTime:2025-04-14 16:42:48 +0000 UTC Type:0 Mac:52:54:00:d8:e5:52 Iaid: IPaddr:192.168.61.165 Prefix:24 Hostname:bridge-036922 Clientid:01:52:54:00:d8:e5:52}
	I0414 15:43:18.515146 1908903 main.go:141] libmachine: (bridge-036922) DBG | domain bridge-036922 has defined IP address 192.168.61.165 and MAC address 52:54:00:d8:e5:52 in network mk-bridge-036922
	I0414 15:43:18.515270 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHPort
	I0414 15:43:18.515457 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHKeyPath
	I0414 15:43:18.515631 1908903 main.go:141] libmachine: (bridge-036922) Calling .GetSSHUsername
	I0414 15:43:18.515798 1908903 sshutil.go:53] new ssh client: &{IP:192.168.61.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/bridge-036922/id_rsa Username:docker}
	I0414 15:43:18.606562 1908903 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 15:43:18.657948 1908903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 15:43:18.790591 1908903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 15:43:18.828450 1908903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 15:43:19.267941 1908903 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
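The sed pipeline logged at 15:43:18.606562 patches the coredns ConfigMap so that host.minikube.internal resolves to the host; the result reported on the line above can be verified directly. A sketch, with the expected fragment reconstructed from that sed expression and the IP taken from the log:

  # Inspect the patched Corefile in the coredns ConfigMap
  kubectl --context bridge-036922 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
  # Expected fragment (reconstructed from the sed expression above):
  #   hosts {
  #      192.168.61.1 host.minikube.internal
  #      fallthrough
  #   }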
	I0414 15:43:19.268837 1908903 node_ready.go:35] waiting up to 15m0s for node "bridge-036922" to be "Ready" ...
	I0414 15:43:19.321139 1908903 node_ready.go:49] node "bridge-036922" has status "Ready":"True"
	I0414 15:43:19.321167 1908903 node_ready.go:38] duration metric: took 52.287821ms for node "bridge-036922" to be "Ready" ...
	I0414 15:43:19.321178 1908903 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:43:19.337686 1908903 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:19.349044 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.349081 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.349385 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.349403 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.349415 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.349423 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.349687 1908903 main.go:141] libmachine: (bridge-036922) DBG | Closing plugin on server side
	I0414 15:43:19.349706 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.349721 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.430028 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.430059 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.430405 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.430426 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.738624 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.738657 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.738999 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.739023 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.739028 1908903 main.go:141] libmachine: (bridge-036922) DBG | Closing plugin on server side
	I0414 15:43:19.739033 1908903 main.go:141] libmachine: Making call to close driver server
	I0414 15:43:19.739051 1908903 main.go:141] libmachine: (bridge-036922) Calling .Close
	I0414 15:43:19.740924 1908903 main.go:141] libmachine: Successfully made call to close driver server
	I0414 15:43:19.740945 1908903 main.go:141] libmachine: Making call to close connection to plugin binary
	I0414 15:43:19.743503 1908903 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0414 15:43:19.744467 1908903 addons.go:514] duration metric: took 1.307001565s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0414 15:43:19.772839 1908903 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-036922" context rescaled to 1 replicas
	I0414 15:43:21.344207 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:23.843345 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:25.844108 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:28.344124 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:30.352697 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:32.843340 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:34.845218 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:36.845518 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:39.345615 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:41.844911 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:44.344880 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:46.844497 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:49.345176 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:51.843480 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:53.843574 1908903 pod_ready.go:103] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"False"
	I0414 15:43:55.845252 1908903 pod_ready.go:93] pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:55.845283 1908903 pod_ready.go:82] duration metric: took 36.507553933s for pod "coredns-668d6bf9bc-htdqv" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.845297 1908903 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-sf5z2" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.847823 1908903 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-sf5z2" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-sf5z2" not found
	I0414 15:43:55.847853 1908903 pod_ready.go:82] duration metric: took 2.54674ms for pod "coredns-668d6bf9bc-sf5z2" in "kube-system" namespace to be "Ready" ...
	E0414 15:43:55.847867 1908903 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-sf5z2" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-sf5z2" not found
	I0414 15:43:55.847875 1908903 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.852708 1908903 pod_ready.go:93] pod "etcd-bridge-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:55.852735 1908903 pod_ready.go:82] duration metric: took 4.851802ms for pod "etcd-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.852747 1908903 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.857917 1908903 pod_ready.go:93] pod "kube-apiserver-bridge-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:55.857941 1908903 pod_ready.go:82] duration metric: took 5.186792ms for pod "kube-apiserver-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.857954 1908903 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.862028 1908903 pod_ready.go:93] pod "kube-controller-manager-bridge-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:55.862052 1908903 pod_ready.go:82] duration metric: took 4.089611ms for pod "kube-controller-manager-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:55.862066 1908903 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-m4qjw" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:56.042402 1908903 pod_ready.go:93] pod "kube-proxy-m4qjw" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:56.042429 1908903 pod_ready.go:82] duration metric: took 180.35577ms for pod "kube-proxy-m4qjw" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:56.042439 1908903 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:56.445374 1908903 pod_ready.go:93] pod "kube-scheduler-bridge-036922" in "kube-system" namespace has status "Ready":"True"
	I0414 15:43:56.445414 1908903 pod_ready.go:82] duration metric: took 402.96709ms for pod "kube-scheduler-bridge-036922" in "kube-system" namespace to be "Ready" ...
	I0414 15:43:56.445428 1908903 pod_ready.go:39] duration metric: took 37.124235593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 15:43:56.445456 1908903 api_server.go:52] waiting for apiserver process to appear ...
	I0414 15:43:56.445532 1908903 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:43:56.460913 1908903 api_server.go:72] duration metric: took 38.023515623s to wait for apiserver process to appear ...
	I0414 15:43:56.460945 1908903 api_server.go:88] waiting for apiserver healthz status ...
	I0414 15:43:56.460966 1908903 api_server.go:253] Checking apiserver healthz at https://192.168.61.165:8443/healthz ...
	I0414 15:43:56.465387 1908903 api_server.go:279] https://192.168.61.165:8443/healthz returned 200:
	ok
	I0414 15:43:56.466323 1908903 api_server.go:141] control plane version: v1.32.2
	I0414 15:43:56.466347 1908903 api_server.go:131] duration metric: took 5.396979ms to wait for apiserver health ...
	I0414 15:43:56.466356 1908903 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 15:43:56.642591 1908903 system_pods.go:59] 7 kube-system pods found
	I0414 15:43:56.642633 1908903 system_pods.go:61] "coredns-668d6bf9bc-htdqv" [c857ef30-5813-45fe-b25e-3baa663ae97e] Running
	I0414 15:43:56.642641 1908903 system_pods.go:61] "etcd-bridge-036922" [db3aa367-4ce7-46f8-9836-5dd5993c5db9] Running
	I0414 15:43:56.642647 1908903 system_pods.go:61] "kube-apiserver-bridge-036922" [89106101-c303-4d87-be62-98869183e702] Running
	I0414 15:43:56.642653 1908903 system_pods.go:61] "kube-controller-manager-bridge-036922" [03e28ccd-fe05-4c06-a146-f732f20cfd9f] Running
	I0414 15:43:56.642657 1908903 system_pods.go:61] "kube-proxy-m4qjw" [92068c58-57c5-4fdb-a990-24376f951c61] Running
	I0414 15:43:56.642662 1908903 system_pods.go:61] "kube-scheduler-bridge-036922" [7101510d-cae7-4e98-b155-044417258287] Running
	I0414 15:43:56.642667 1908903 system_pods.go:61] "storage-provisioner" [a7921eed-0433-4ab8-a62a-1c3d799d30ce] Running
	I0414 15:43:56.642676 1908903 system_pods.go:74] duration metric: took 176.312498ms to wait for pod list to return data ...
	I0414 15:43:56.642689 1908903 default_sa.go:34] waiting for default service account to be created ...
	I0414 15:43:56.844349 1908903 default_sa.go:45] found service account: "default"
	I0414 15:43:56.844380 1908903 default_sa.go:55] duration metric: took 201.684045ms for default service account to be created ...
	I0414 15:43:56.844392 1908903 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 15:43:57.042251 1908903 system_pods.go:86] 7 kube-system pods found
	I0414 15:43:57.042294 1908903 system_pods.go:89] "coredns-668d6bf9bc-htdqv" [c857ef30-5813-45fe-b25e-3baa663ae97e] Running
	I0414 15:43:57.042303 1908903 system_pods.go:89] "etcd-bridge-036922" [db3aa367-4ce7-46f8-9836-5dd5993c5db9] Running
	I0414 15:43:57.042311 1908903 system_pods.go:89] "kube-apiserver-bridge-036922" [89106101-c303-4d87-be62-98869183e702] Running
	I0414 15:43:57.042316 1908903 system_pods.go:89] "kube-controller-manager-bridge-036922" [03e28ccd-fe05-4c06-a146-f732f20cfd9f] Running
	I0414 15:43:57.042321 1908903 system_pods.go:89] "kube-proxy-m4qjw" [92068c58-57c5-4fdb-a990-24376f951c61] Running
	I0414 15:43:57.042326 1908903 system_pods.go:89] "kube-scheduler-bridge-036922" [7101510d-cae7-4e98-b155-044417258287] Running
	I0414 15:43:57.042332 1908903 system_pods.go:89] "storage-provisioner" [a7921eed-0433-4ab8-a62a-1c3d799d30ce] Running
	I0414 15:43:57.042342 1908903 system_pods.go:126] duration metric: took 197.94205ms to wait for k8s-apps to be running ...
	I0414 15:43:57.042352 1908903 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 15:43:57.042435 1908903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:43:57.057742 1908903 system_svc.go:56] duration metric: took 15.376538ms WaitForService to wait for kubelet
	I0414 15:43:57.057778 1908903 kubeadm.go:582] duration metric: took 38.620384323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 15:43:57.057800 1908903 node_conditions.go:102] verifying NodePressure condition ...
	I0414 15:43:57.242521 1908903 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0414 15:43:57.242564 1908903 node_conditions.go:123] node cpu capacity is 2
	I0414 15:43:57.242579 1908903 node_conditions.go:105] duration metric: took 184.775007ms to run NodePressure ...
	I0414 15:43:57.242594 1908903 start.go:241] waiting for startup goroutines ...
	I0414 15:43:57.242600 1908903 start.go:246] waiting for cluster config update ...
	I0414 15:43:57.242612 1908903 start.go:255] writing updated cluster config ...
	I0414 15:43:57.242897 1908903 ssh_runner.go:195] Run: rm -f paused
	I0414 15:43:57.294073 1908903 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 15:43:57.297439 1908903 out.go:177] * Done! kubectl is now configured to use "bridge-036922" cluster and "default" namespace by default
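The readiness loop above (the pod_ready and api_server waits) can be reproduced manually against the finished profile. A short sketch, assuming the kubeconfig context bridge-036922 written above and the apiserver address 192.168.61.165:8443 from the log:

  # Wait for the same system pods the test polls, then probe the apiserver health endpoint
  kubectl --context bridge-036922 -n kube-system wait --for=condition=Ready \
    pod -l k8s-app=kube-dns --timeout=120s
  kubectl --context bridge-036922 -n kube-system get pods -o wide
  # /healthz is typically readable without credentials (system:public-info-viewer binding)
  curl -k https://192.168.61.165:8443/healthz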
	
	
	==> CRI-O <==
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.467086962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744646278467055570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ebb280e-1a45-414a-baed-99d7dabad1ce name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.467671569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d0ead5a-cecc-407d-9379-4289b97f1ccc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.467733584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d0ead5a-cecc-407d-9379-4289b97f1ccc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.467774947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4d0ead5a-cecc-407d-9379-4289b97f1ccc name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.500226730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d5b5fda-2997-495e-81f4-40d2d015b204 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.500330782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d5b5fda-2997-495e-81f4-40d2d015b204 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.501270558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40582e4a-4a2f-4dda-ae84-0f3fff420309 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.501673956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744646278501655571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40582e4a-4a2f-4dda-ae84-0f3fff420309 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.502189835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f95bfa38-1b1d-40e2-851f-4a9a443192eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.502241607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f95bfa38-1b1d-40e2-851f-4a9a443192eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.502278002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f95bfa38-1b1d-40e2-851f-4a9a443192eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.534642188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47e86b31-061b-4530-8d44-1dc626deebab name=/runtime.v1.RuntimeService/Version
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.534736495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47e86b31-061b-4530-8d44-1dc626deebab name=/runtime.v1.RuntimeService/Version
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.535782967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d77d223-ef56-4ad4-9a57-7896659a8973 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.536255373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744646278536233315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d77d223-ef56-4ad4-9a57-7896659a8973 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.536702905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=676a46e7-7471-4743-9d6a-25cf07f6e162 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.536777242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=676a46e7-7471-4743-9d6a-25cf07f6e162 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.536814834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=676a46e7-7471-4743-9d6a-25cf07f6e162 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.569843226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9462697c-fa76-4434-8579-57b2b5ad2be6 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.569942862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9462697c-fa76-4434-8579-57b2b5ad2be6 name=/runtime.v1.RuntimeService/Version
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.571555175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f33c6b1-1790-4b90-a884-abad5da2db5d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.571966262Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744646278571942856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f33c6b1-1790-4b90-a884-abad5da2db5d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.572567948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86a08471-ecc7-4c8c-86a1-e98a3e20a726 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.572633217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86a08471-ecc7-4c8c-86a1-e98a3e20a726 name=/runtime.v1.RuntimeService/ListContainers
	Apr 14 15:57:58 old-k8s-version-529869 crio[627]: time="2025-04-14 15:57:58.572669986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=86a08471-ecc7-4c8c-86a1-e98a3e20a726 name=/runtime.v1.RuntimeService/ListContainers
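The CRI-O entries above are the server side of CRI calls (Version, ImageFsInfo, ListContainers) that keep returning an empty container list. The same endpoints can be queried from the node with crictl; a sketch, assuming crictl is pointed at the default CRI-O socket:

  # Query the same CRI endpoints the log shows CRI-O answering
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # ListContainers
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageFsInfo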
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
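The describe-nodes failure above means nothing is listening on localhost:8443 on this node, i.e. the kube-apiserver static pod is not running. A quick triage sequence for that state, as a sketch to run on the old-k8s-version-529869 node:

  # Is an apiserver container present at all, and can kubelet see its manifest?
  sudo crictl ps -a | grep kube-apiserver
  ls -l /etc/kubernetes/manifests/
  # kubelet is what launches the static pods; check whether it is running
  systemctl status kubelet --no-pager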
	
	
	==> dmesg <==
	[Apr14 15:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057458] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.052593] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.402276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.030956] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.742898] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.855862] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.065964] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065397] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.221422] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.162433] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.282594] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +6.876323] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.067338] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.988800] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[Apr14 15:35] kauditd_printk_skb: 46 callbacks suppressed
	[Apr14 15:39] systemd-fstab-generator[4999]: Ignoring "noauto" option for root device
	[Apr14 15:41] systemd-fstab-generator[5283]: Ignoring "noauto" option for root device
	[  +0.099182] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:57:58 up 23 min,  0 users,  load average: 0.08, 0.04, 0.06
	Linux old-k8s-version-529869 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc00024eee0, 0x4f04d00, 0xc0004044d0)
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00085f6f0)
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000839ef0, 0x4f0ac20, 0xc000119810, 0x1, 0xc0001000c0)
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024eee0, 0xc0001000c0)
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000be1cb0, 0xc000337700)
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7106]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 14 15:57:53 old-k8s-version-529869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 175.
	Apr 14 15:57:53 old-k8s-version-529869 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 14 15:57:53 old-k8s-version-529869 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7114]: I0414 15:57:53.781482    7114 server.go:416] Version: v1.20.0
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7114]: I0414 15:57:53.782174    7114 server.go:837] Client rotation is on, will bootstrap in background
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7114]: I0414 15:57:53.784369    7114 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7114]: W0414 15:57:53.785507    7114 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 14 15:57:53 old-k8s-version-529869 kubelet[7114]: I0414 15:57:53.785563    7114 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 2 (237.368522ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-529869" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (355.20s)

                                                
                                    

Test pass (275/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.07
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 4.21
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.07
18 TestDownloadOnly/v1.32.2/DeleteAll 0.15
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
22 TestOffline 89.89
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 134.72
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 9.55
35 TestAddons/parallel/Registry 18.79
37 TestAddons/parallel/InspektorGadget 11.84
38 TestAddons/parallel/MetricsServer 6.41
40 TestAddons/parallel/CSI 65.64
41 TestAddons/parallel/Headlamp 19.84
42 TestAddons/parallel/CloudSpanner 5.86
43 TestAddons/parallel/LocalPath 54.82
44 TestAddons/parallel/NvidiaDevicePlugin 6.67
45 TestAddons/parallel/Yakd 12.57
47 TestAddons/StoppedEnableDisable 91.18
48 TestCertOptions 48.51
49 TestCertExpiration 292.66
51 TestForceSystemdFlag 75.5
52 TestForceSystemdEnv 59.38
54 TestKVMDriverInstallOrUpdate 1.36
58 TestErrorSpam/setup 40.73
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.79
61 TestErrorSpam/pause 1.68
62 TestErrorSpam/unpause 1.8
63 TestErrorSpam/stop 5.4
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 51.62
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 53.07
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.26
75 TestFunctional/serial/CacheCmd/cache/add_local 1.15
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 367.15
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.43
86 TestFunctional/serial/LogsFileCmd 1.45
87 TestFunctional/serial/InvalidService 4.77
89 TestFunctional/parallel/ConfigCmd 0.38
90 TestFunctional/parallel/DashboardCmd 19.41
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.24
97 TestFunctional/parallel/ServiceCmdConnect 11.7
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 36.74
101 TestFunctional/parallel/SSHCmd 0.5
102 TestFunctional/parallel/CpCmd 1.55
103 TestFunctional/parallel/MySQL 26.32
104 TestFunctional/parallel/FileSync 0.24
105 TestFunctional/parallel/CertSync 1.43
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
113 TestFunctional/parallel/License 0.2
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.84
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.43
121 TestFunctional/parallel/ImageCommands/Setup 0.48
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.8
134 TestFunctional/parallel/ProfileCmd/profile_list 0.37
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
136 TestFunctional/parallel/ServiceCmd/DeployApp 11.26
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.17
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.19
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.97
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.88
143 TestFunctional/parallel/ServiceCmd/List 0.55
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
145 TestFunctional/parallel/Version/short 0.05
146 TestFunctional/parallel/Version/components 0.57
147 TestFunctional/parallel/MountCmd/any-port 18.07
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
149 TestFunctional/parallel/ServiceCmd/Format 0.43
150 TestFunctional/parallel/ServiceCmd/URL 0.61
151 TestFunctional/parallel/MountCmd/specific-port 1.22
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 199.8
161 TestMultiControlPlane/serial/DeployApp 6.47
162 TestMultiControlPlane/serial/PingHostFromPods 1.29
163 TestMultiControlPlane/serial/AddWorkerNode 52.72
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
166 TestMultiControlPlane/serial/CopyFile 13.75
167 TestMultiControlPlane/serial/StopSecondaryNode 91.37
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
169 TestMultiControlPlane/serial/RestartSecondaryNode 56.82
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 439.74
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.44
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
174 TestMultiControlPlane/serial/StopCluster 272.39
175 TestMultiControlPlane/serial/RestartCluster 133.07
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
177 TestMultiControlPlane/serial/AddSecondaryNode 78.19
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
182 TestJSONOutput/start/Command 82.67
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.71
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.68
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.4
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 90.16
214 TestMountStart/serial/StartWithMountFirst 28.88
215 TestMountStart/serial/VerifyMountFirst 0.39
216 TestMountStart/serial/StartWithMountSecond 28.26
217 TestMountStart/serial/VerifyMountSecond 0.4
218 TestMountStart/serial/DeleteFirst 0.58
219 TestMountStart/serial/VerifyMountPostDelete 0.41
220 TestMountStart/serial/Stop 1.54
221 TestMountStart/serial/RestartStopped 21.8
222 TestMountStart/serial/VerifyMountPostStop 0.39
225 TestMultiNode/serial/FreshStart2Nodes 115.51
226 TestMultiNode/serial/DeployApp2Nodes 5.93
227 TestMultiNode/serial/PingHostFrom2Pods 0.83
228 TestMultiNode/serial/AddNode 78.73
229 TestMultiNode/serial/MultiNodeLabels 0.07
230 TestMultiNode/serial/ProfileList 0.62
231 TestMultiNode/serial/CopyFile 7.67
232 TestMultiNode/serial/StopNode 2.34
233 TestMultiNode/serial/StartAfterStop 38.5
234 TestMultiNode/serial/RestartKeepsNodes 339.76
235 TestMultiNode/serial/DeleteNode 2.74
236 TestMultiNode/serial/StopMultiNode 181.87
237 TestMultiNode/serial/RestartMultiNode 112.73
238 TestMultiNode/serial/ValidateNameConflict 44.38
245 TestScheduledStopUnix 116.87
249 TestRunningBinaryUpgrade 210.77
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
263 TestPause/serial/Start 118.46
264 TestNoKubernetes/serial/StartWithK8s 101.66
265 TestNoKubernetes/serial/StartWithStopK8s 36.89
266 TestPause/serial/SecondStartNoReconfiguration 58.3
267 TestNoKubernetes/serial/Start 33.65
275 TestNetworkPlugins/group/false 4.18
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
280 TestNoKubernetes/serial/ProfileList 28.74
281 TestPause/serial/Pause 0.74
282 TestPause/serial/VerifyStatus 0.26
283 TestPause/serial/Unpause 0.66
284 TestPause/serial/PauseAgain 0.91
285 TestPause/serial/DeletePaused 0.76
286 TestPause/serial/VerifyDeletedResources 12.78
287 TestStoppedBinaryUpgrade/Setup 0.36
288 TestStoppedBinaryUpgrade/Upgrade 152.61
289 TestNoKubernetes/serial/Stop 2.54
290 TestNoKubernetes/serial/StartNoArgs 42.12
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
292 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
296 TestStartStop/group/no-preload/serial/FirstStart 76.07
297 TestStartStop/group/no-preload/serial/DeployApp 9.34
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
299 TestStartStop/group/no-preload/serial/Stop 91.14
301 TestStartStop/group/embed-certs/serial/FirstStart 87.09
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 112.87
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
305 TestStartStop/group/no-preload/serial/SecondStart 319.45
306 TestStartStop/group/embed-certs/serial/DeployApp 10.39
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
308 TestStartStop/group/embed-certs/serial/Stop 90.94
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.31
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.26
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/embed-certs/serial/SecondStart 337.73
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 339.29
318 TestStartStop/group/old-k8s-version/serial/Stop 3.3
319 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
324 TestStartStop/group/no-preload/serial/Pause 2.85
326 TestStartStop/group/newest-cni/serial/FirstStart 48.96
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.44
329 TestStartStop/group/newest-cni/serial/Stop 10.39
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestNetworkPlugins/group/auto/Start 85.03
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
340 TestStartStop/group/embed-certs/serial/Pause 3.1
341 TestNetworkPlugins/group/kindnet/Start 67.58
342 TestNetworkPlugins/group/auto/KubeletFlags 0.24
343 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
344 TestNetworkPlugins/group/auto/NetCatPod 11.28
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
346 TestNetworkPlugins/group/auto/DNS 0.18
347 TestNetworkPlugins/group/auto/Localhost 0.13
348 TestNetworkPlugins/group/auto/HairPin 0.14
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.86
351 TestNetworkPlugins/group/calico/Start 84.87
352 TestNetworkPlugins/group/custom-flannel/Start 90.78
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
355 TestNetworkPlugins/group/kindnet/NetCatPod 11.32
356 TestNetworkPlugins/group/kindnet/DNS 0.19
357 TestNetworkPlugins/group/kindnet/Localhost 0.15
358 TestNetworkPlugins/group/kindnet/HairPin 0.16
359 TestNetworkPlugins/group/enable-default-cni/Start 88.1
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestNetworkPlugins/group/calico/KubeletFlags 0.22
362 TestNetworkPlugins/group/calico/NetCatPod 10.24
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.23
365 TestNetworkPlugins/group/calico/DNS 0.25
366 TestNetworkPlugins/group/calico/Localhost 0.16
367 TestNetworkPlugins/group/calico/HairPin 0.14
368 TestNetworkPlugins/group/custom-flannel/DNS 0.2
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
371 TestNetworkPlugins/group/flannel/Start 70.56
372 TestNetworkPlugins/group/bridge/Start 96.97
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
375 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
376 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
377 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
381 TestNetworkPlugins/group/flannel/NetCatPod 9.22
382 TestNetworkPlugins/group/flannel/DNS 0.15
383 TestNetworkPlugins/group/flannel/Localhost 0.12
384 TestNetworkPlugins/group/flannel/HairPin 0.14
385 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
386 TestNetworkPlugins/group/bridge/NetCatPod 10.28
387 TestNetworkPlugins/group/bridge/DNS 0.14
388 TestNetworkPlugins/group/bridge/Localhost 0.12
389 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (8.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-370703 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-370703 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.071084796s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0414 14:17:10.120909 1853270 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0414 14:17:10.121028 1853270 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-370703
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-370703: exit status 85 (66.716261ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-370703 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |          |
	|         | -p download-only-370703        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:17:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:17:02.095237 1853282 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:17:02.095486 1853282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:17:02.095502 1853282 out.go:358] Setting ErrFile to fd 2...
	I0414 14:17:02.095506 1853282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:17:02.095678 1853282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	W0414 14:17:02.095827 1853282 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20512-1845971/.minikube/config/config.json: open /home/jenkins/minikube-integration/20512-1845971/.minikube/config/config.json: no such file or directory
	I0414 14:17:02.096434 1853282 out.go:352] Setting JSON to true
	I0414 14:17:02.097628 1853282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":35966,"bootTime":1744604256,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:17:02.097767 1853282 start.go:139] virtualization: kvm guest
	I0414 14:17:02.100227 1853282 out.go:97] [download-only-370703] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0414 14:17:02.100383 1853282 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball: no such file or directory
	I0414 14:17:02.100445 1853282 notify.go:220] Checking for updates...
	I0414 14:17:02.101844 1853282 out.go:169] MINIKUBE_LOCATION=20512
	I0414 14:17:02.103473 1853282 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:17:02.104930 1853282 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 14:17:02.106249 1853282 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 14:17:02.107579 1853282 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 14:17:02.110161 1853282 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 14:17:02.110515 1853282 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:17:02.146090 1853282 out.go:97] Using the kvm2 driver based on user configuration
	I0414 14:17:02.146133 1853282 start.go:297] selected driver: kvm2
	I0414 14:17:02.146144 1853282 start.go:901] validating driver "kvm2" against <nil>
	I0414 14:17:02.146568 1853282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:17:02.146683 1853282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-1845971/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0414 14:17:02.164062 1853282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0414 14:17:02.164121 1853282 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 14:17:02.164681 1853282 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0414 14:17:02.164842 1853282 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 14:17:02.164891 1853282 cni.go:84] Creating CNI manager for ""
	I0414 14:17:02.164956 1853282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0414 14:17:02.164970 1853282 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0414 14:17:02.165047 1853282 start.go:340] cluster config:
	{Name:download-only-370703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-370703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:17:02.165288 1853282 iso.go:125] acquiring lock: {Name:mk9159854686c19b2179fc7bffd50051c3c78481 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 14:17:02.167387 1853282 out.go:97] Downloading VM boot image ...
	I0414 14:17:02.167454 1853282 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0414 14:17:04.948769 1853282 out.go:97] Starting "download-only-370703" primary control-plane node in "download-only-370703" cluster
	I0414 14:17:04.948795 1853282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 14:17:04.972970 1853282 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0414 14:17:04.973007 1853282 cache.go:56] Caching tarball of preloaded images
	I0414 14:17:04.973177 1853282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0414 14:17:04.975016 1853282 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0414 14:17:04.975045 1853282 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0414 14:17:05.002812 1853282 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-370703 host does not exist
	  To start a cluster, run: "minikube start -p download-only-370703"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-370703
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (4.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-174763 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-174763 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.213405232s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (4.21s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0414 14:17:14.690062 1853270 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0414 14:17:14.690118 1853270 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-1845971/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-174763
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-174763: exit status 85 (66.82749ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-370703 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | -p download-only-370703        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	| delete  | -p download-only-370703        | download-only-370703 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC | 14 Apr 25 14:17 UTC |
	| start   | -o=json --download-only        | download-only-174763 | jenkins | v1.35.0 | 14 Apr 25 14:17 UTC |                     |
	|         | -p download-only-174763        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 14:17:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 14:17:10.521650 1853482 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:17:10.521793 1853482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:17:10.521804 1853482 out.go:358] Setting ErrFile to fd 2...
	I0414 14:17:10.521807 1853482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:17:10.522027 1853482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 14:17:10.522692 1853482 out.go:352] Setting JSON to true
	I0414 14:17:10.523885 1853482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":35975,"bootTime":1744604256,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:17:10.523986 1853482 start.go:139] virtualization: kvm guest
	I0414 14:17:10.525896 1853482 out.go:97] [download-only-174763] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:17:10.526051 1853482 notify.go:220] Checking for updates...
	I0414 14:17:10.527413 1853482 out.go:169] MINIKUBE_LOCATION=20512
	I0414 14:17:10.528856 1853482 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:17:10.530202 1853482 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 14:17:10.531525 1853482 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 14:17:10.532783 1853482 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-174763 host does not exist
	  To start a cluster, run: "minikube start -p download-only-174763"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-174763
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I0414 14:17:15.340951 1853270 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-452877 --alsologtostderr --binary-mirror http://127.0.0.1:42987 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-452877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-452877
--- PASS: TestBinaryMirror (0.66s)

                                                
                                    
TestOffline (89.89s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-470176 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-470176 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.832163046s)
helpers_test.go:175: Cleaning up "offline-crio-470176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-470176
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-470176: (1.054180827s)
--- PASS: TestOffline (89.89s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-885191
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-885191: exit status 85 (59.779699ms)

                                                
                                                
-- stdout --
	* Profile "addons-885191" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-885191"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-885191
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-885191: exit status 85 (60.518657ms)

                                                
                                                
-- stdout --
	* Profile "addons-885191" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-885191"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (134.72s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-885191 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-885191 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m14.715835294s)
--- PASS: TestAddons/Setup (134.72s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-885191 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-885191 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-885191 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-885191 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ef6f8d70-e12e-4fee-9ae6-742fa4df29ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ef6f8d70-e12e-4fee-9ae6-742fa4df29ac] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004611523s
addons_test.go:633: (dbg) Run:  kubectl --context addons-885191 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-885191 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-885191 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

                                                
                                    
TestAddons/parallel/Registry (18.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 9.755877ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-glhj8" [c9af8f5b-2acb-40bb-bf80-598ad76b971c] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.192309674s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8b99t" [493c08aa-c265-4baf-ac9c-36c60e189a83] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003584627s
addons_test.go:331: (dbg) Run:  kubectl --context addons-885191 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-885191 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-885191 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.374433307s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 ip
2025/04/14 14:20:07 [DEBUG] GET http://192.168.39.123:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable registry --alsologtostderr -v=1: (1.033543367s)
--- PASS: TestAddons/parallel/Registry (18.79s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-h7dcz" [e93b3197-6871-4ba9-a135-8efcfbf2ff6e] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003206979s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable inspektor-gadget --alsologtostderr -v=1: (5.836061153s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.41s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
I0414 14:19:49.396414 1853270 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0414 14:19:49.404359 1853270 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0414 14:19:49.404399 1853270 kapi.go:107] duration metric: took 8.002596ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:394: metrics-server stabilized in 8.416954ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-4wklt" [e0f2fc09-896e-4cb8-8c80-f2a06e7414ec] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.193403557s
addons_test.go:402: (dbg) Run:  kubectl --context addons-885191 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable metrics-server --alsologtostderr -v=1: (1.145036971s)
--- PASS: TestAddons/parallel/MetricsServer (6.41s)

                                                
                                    
TestAddons/parallel/CSI (65.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.015405ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-885191 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-885191 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [807ec094-d0b6-489c-b12c-e7be872674cb] Pending
helpers_test.go:344: "task-pv-pod" [807ec094-d0b6-489c-b12c-e7be872674cb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [807ec094-d0b6-489c-b12c-e7be872674cb] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.013718697s
addons_test.go:511: (dbg) Run:  kubectl --context addons-885191 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-885191 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-885191 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-885191 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-885191 delete pod task-pv-pod: (1.777077058s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-885191 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-885191 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-885191 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [26112d58-08bf-4e3f-aa2c-1a5e25d293f7] Pending
helpers_test.go:344: "task-pv-pod-restore" [26112d58-08bf-4e3f-aa2c-1a5e25d293f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [26112d58-08bf-4e3f-aa2c-1a5e25d293f7] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003927401s
addons_test.go:553: (dbg) Run:  kubectl --context addons-885191 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-885191 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-885191 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable volumesnapshots --alsologtostderr -v=1: (1.044139112s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.960064697s)
--- PASS: TestAddons/parallel/CSI (65.64s)
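Note: for readers reproducing the csi-hostpath-driver flow above by hand, the steps reduce to the kubectl sequence below. This is a condensed sketch of what the test drives, not a verbatim transcript: `minikube` stands in for the `out/minikube-linux-amd64` binary, the `--context addons-885191` flag is omitted, and the manifest contents live in minikube's testdata tree; resource names are the ones printed in this log.
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod "task-pv-pod" mounts the previously created PVC "hpvc"
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot "new-snapshot-demo" of that claim
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc           # drop the source pod and claim
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # PVC "hpvc-restore" restored from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod "task-pv-pod-restore" consumes the restored claim
    minikube -p addons-885191 addons disable volumesnapshots
    minikube -p addons-885191 addons disable csi-hostpath-driver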

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-885191 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-885191 --alsologtostderr -v=1: (1.00174287s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-nf9jz" [61bd3f77-8f89-417b-acb3-f6919ebcd665] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-nf9jz" [61bd3f77-8f89-417b-acb3-f6919ebcd665] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-nf9jz" [61bd3f77-8f89-417b-acb3-f6919ebcd665] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.006325838s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable headlamp --alsologtostderr -v=1: (6.83548295s)
--- PASS: TestAddons/parallel/Headlamp (19.84s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-l8nqg" [e995ffb1-e549-444e-9a75-1a08a12c6ebd] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003856424s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.86s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (54.82s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-885191 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-885191 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885191 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ef3596da-a2b5-4d58-bcbe-8174bf23a54c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ef3596da-a2b5-4d58-bcbe-8174bf23a54c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ef3596da-a2b5-4d58-bcbe-8174bf23a54c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00468514s
addons_test.go:906: (dbg) Run:  kubectl --context addons-885191 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 ssh "cat /opt/local-path-provisioner/pvc-f2e5318b-1d01-41b3-99fd-f0b6fdfd26b4_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-885191 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-885191 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.846746564s)
--- PASS: TestAddons/parallel/LocalPath (54.82s)
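Note: a rough sketch of what TestAddons/parallel/LocalPath exercises, assuming (as the ssh step above shows) that the local-path provisioner places volumes under /opt/local-path-provisioner; <pv-name> is a placeholder for the generated pvc-... directory and `minikube` stands in for out/minikube-linux-amd64.
    kubectl apply -f testdata/storage-provisioner-rancher/pvc.yaml   # PVC "test-pvc" served by the local-path provisioner
    kubectl apply -f testdata/storage-provisioner-rancher/pod.yaml   # pod "test-local-path" writes file1 into the volume
    minikube -p addons-885191 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"   # read the data back from the node's host path
    kubectl delete pod test-local-path && kubectl delete pvc test-pvc
    minikube -p addons-885191 addons disable storage-provisioner-rancher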

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.67s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cgpdg" [4c80fba9-8b9a-4a73-8daf-be4580ea3fde] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003866208s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.67s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.57s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-kb9bs" [bf1cc0f5-6908-4fb9-ba52-ae26209a5307] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.016204666s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-885191 addons disable yakd --alsologtostderr -v=1: (6.547758979s)
--- PASS: TestAddons/parallel/Yakd (12.57s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (91.18s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-885191
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-885191: (1m30.868453303s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-885191
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-885191
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-885191
--- PASS: TestAddons/StoppedEnableDisable (91.18s)

                                                
                                    
x
+
TestCertOptions (48.51s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-722854 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-722854 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (47.088644806s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-722854 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-722854 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-722854 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-722854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-722854
--- PASS: TestCertOptions (48.51s)
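Note: a minimal sketch of the check TestCertOptions performs, i.e. that the extra SANs and the custom API-server port end up in the generated certificate and kubeconfig. The grep filter is illustrative only and not part of the test; the flags are the ones from the start command above.
    minikube start -p cert-options-722854 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # inspect the SANs baked into the apiserver certificate
    minikube -p cert-options-722854 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 "Subject Alternative Name"
    kubectl --context cert-options-722854 config view   # presumably the server URL should use port 8555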

                                                
                                    
x
+
TestCertExpiration (292.66s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-197648 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-197648 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m20.938305718s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-197648 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-197648 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (30.77386223s)
helpers_test.go:175: Cleaning up "cert-expiration-197648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-197648
--- PASS: TestCertExpiration (292.66s)

                                                
                                    
x
+
TestForceSystemdFlag (75.5s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-470470 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-470470 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.363746963s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-470470 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-470470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-470470
--- PASS: TestForceSystemdFlag (75.50s)
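Note: the --force-systemd assertion above boils down to inspecting the CRI-O drop-in written by minikube. The expected cgroup_manager value below is an assumption based on what the flag is documented to do, not quoted from this log.
    minikube start -p force-systemd-flag-470470 --memory=2048 --force-systemd \
      --driver=kvm2 --container-runtime=crio
    minikube -p force-systemd-flag-470470 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
    # expected to contain something like: cgroup_manager = "systemd"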

                                                
                                    
x
+
TestForceSystemdEnv (59.38s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-207787 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-207787 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.47062317s)
helpers_test.go:175: Cleaning up "force-systemd-env-207787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-207787
--- PASS: TestForceSystemdEnv (59.38s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.36s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0414 15:24:53.099539 1853270 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 15:24:53.099705 1853270 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0414 15:24:53.136092 1853270 install.go:62] docker-machine-driver-kvm2: exit status 1
W0414 15:24:53.136279 1853270 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 15:24:53.136326 1853270 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4051315924/001/docker-machine-driver-kvm2
I0414 15:24:53.263509 1853270 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4051315924/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000656f28 gz:0xc000656fd0 tar:0xc000656f80 tar.bz2:0xc000656f90 tar.gz:0xc000656fa0 tar.xz:0xc000656fb0 tar.zst:0xc000656fc0 tbz2:0xc000656f90 tgz:0xc000656fa0 txz:0xc000656fb0 tzst:0xc000656fc0 xz:0xc000656fd8 zip:0xc000656ff0 zst:0xc000657000] Getters:map[file:0xc001a00e20 http:0xc0008a7d10 https:0xc0008a7e00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 15:24:53.263574 1853270 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4051315924/001/docker-machine-driver-kvm2
I0414 15:24:53.917708 1853270 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 15:24:53.917841 1853270 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 15:24:53.952056 1853270 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0414 15:24:53.952105 1853270 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0414 15:24:53.952203 1853270 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 15:24:53.952252 1853270 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4051315924/002/docker-machine-driver-kvm2
I0414 15:24:53.980872 1853270 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4051315924/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000656f28 gz:0xc000656fd0 tar:0xc000656f80 tar.bz2:0xc000656f90 tar.gz:0xc000656fa0 tar.xz:0xc000656fb0 tar.zst:0xc000656fc0 tbz2:0xc000656f90 tgz:0xc000656fa0 txz:0xc000656fb0 tzst:0xc000656fc0 xz:0xc000656fd8 zip:0xc000656ff0 zst:0xc000657000] Getters:map[file:0xc001d933a0 http:0xc001da8eb0 https:0xc001da8f00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 15:24:53.980925 1853270 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4051315924/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.36s)
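Note: the two "invalid checksum ... bad response code: 404" warnings above are the expected fallback path — the old v1.3.0 release has no arch-suffixed asset, so the downloader retries the un-suffixed name. The curl commands below only illustrate the URL pattern; minikube itself uses the go-getter library shown in the log, not curl.
    # arch-specific asset is tried first (404 for v1.3.0)
    curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64
    # fallback to the common, un-suffixed asset
    curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2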

                                                
                                    
x
+
TestErrorSpam/setup (40.73s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-861520 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-861520 --driver=kvm2  --container-runtime=crio
E0414 14:24:31.490769 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:31.497214 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:31.508692 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:31.530228 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:31.571771 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:31.653273 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:31.814894 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:32.136682 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:32.778823 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:34.060553 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:36.622565 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:41.744402 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:24:51.986116 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-861520 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-861520 --driver=kvm2  --container-runtime=crio: (40.726481659s)
--- PASS: TestErrorSpam/setup (40.73s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
x
+
TestErrorSpam/pause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 pause
--- PASS: TestErrorSpam/pause (1.68s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
x
+
TestErrorSpam/stop (5.4s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 stop: (2.285009758s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 stop: (1.686368769s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-861520 --log_dir /tmp/nospam-861520 stop: (1.42580657s)
--- PASS: TestErrorSpam/stop (5.40s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20512-1845971/.minikube/files/etc/test/nested/copy/1853270/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (51.62s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907700 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0414 14:25:12.467911 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:25:53.429392 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-907700 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (51.617291575s)
--- PASS: TestFunctional/serial/StartWithProxy (51.62s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (53.07s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0414 14:26:03.047401 1853270 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907700 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-907700 --alsologtostderr -v=8: (53.066529079s)
functional_test.go:680: soft start took 53.067394219s for "functional-907700" cluster.
I0414 14:26:56.114410 1853270 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (53.07s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-907700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-907700 cache add registry.k8s.io/pause:3.1: (1.046385252s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-907700 cache add registry.k8s.io/pause:3.3: (1.111018411s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-907700 cache add registry.k8s.io/pause:latest: (1.102518953s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-907700 /tmp/TestFunctionalserialCacheCmdcacheadd_local3918770811/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 cache add minikube-local-cache-test:functional-907700
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 cache delete minikube-local-cache-test:functional-907700
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-907700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)
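Note: the add_local case above amounts to building a throwaway image on the host and adding it to minikube's local cache; a sketch, with <build-context-dir> standing in for the per-run temp directory shown in the log.
    docker build -t minikube-local-cache-test:functional-907700 <build-context-dir>
    minikube -p functional-907700 cache add minikube-local-cache-test:functional-907700
    minikube -p functional-907700 cache delete minikube-local-cache-test:functional-907700
    docker rmi minikube-local-cache-test:functional-907700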

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (229.799092ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-linux-amd64 -p functional-907700 cache reload: (1.013023962s)
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)
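Note: the cache_reload case above can be read as the following round trip inside the node (condensed from the log; `minikube` stands in for out/minikube-linux-amd64).
    minikube -p functional-907700 ssh sudo crictl rmi registry.k8s.io/pause:latest        # remove the image from the node
    minikube -p functional-907700 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    minikube -p functional-907700 cache reload                                            # push cached images back into the node
    minikube -p functional-907700 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again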

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 kubectl -- --context functional-907700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-907700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (367.15s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0414 14:27:15.351987 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:29:31.491331 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:29:59.200917 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-907700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m7.151222512s)
functional_test.go:778: restart took 6m7.15139728s for "functional-907700" cluster.
I0414 14:33:10.264585 1853270 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (367.15s)
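Note: the --extra-config syntax exercised above passes a component.flag=value triple straight to the named Kubernetes component; the command below is the one from this test, shown on its own for readability.
    minikube start -p functional-907700 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all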

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-907700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-907700 logs: (1.429907845s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 logs --file /tmp/TestFunctionalserialLogsFileCmd1149479557/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-907700 logs --file /tmp/TestFunctionalserialLogsFileCmd1149479557/001/logs.txt: (1.443772422s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.77s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-907700 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-907700
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-907700: exit status 115 (287.818191ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.222:30656 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-907700 delete -f testdata/invalidsvc.yaml
functional_test.go:2344: (dbg) Done: kubectl --context functional-907700 delete -f testdata/invalidsvc.yaml: (1.269209306s)
--- PASS: TestFunctional/serial/InvalidService (4.77s)
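Note: the InvalidService case documents the exit-code contract of `minikube service`: when the Service exists but no running pod backs it, the command still prints the NodePort table and exits with status 115 (SVC_UNREACHABLE). A sketch condensed from the log:
    kubectl --context functional-907700 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-907700   # exit status 115: no running pod for service invalid-svc found
    kubectl --context functional-907700 delete -f testdata/invalidsvc.yaml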

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907700 config get cpus: exit status 14 (57.804398ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907700 config get cpus: exit status 14 (58.493572ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
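Note: the ConfigCmd sequence above also documents the exit-code contract of `minikube config`: `get` on an unset key returns status 14, while set/unset return 0. In short (profile flag omitted; behavior as observed in this log):
    minikube -p functional-907700 config get cpus     # exit 14: key not found in config
    minikube -p functional-907700 config set cpus 2
    minikube -p functional-907700 config get cpus     # exit 0 once the key is set
    minikube -p functional-907700 config unset cpus
    minikube -p functional-907700 config get cpus     # exit 14 again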

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (19.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-907700 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-907700 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1863108: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.41s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907700 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-907700 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.765342ms)

                                                
                                                
-- stdout --
	* [functional-907700] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:33:36.692561 1863006 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:33:36.692669 1863006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:33:36.692674 1863006 out.go:358] Setting ErrFile to fd 2...
	I0414 14:33:36.692678 1863006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:33:36.692923 1863006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 14:33:36.693483 1863006 out.go:352] Setting JSON to false
	I0414 14:33:36.694708 1863006 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":36961,"bootTime":1744604256,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:33:36.694778 1863006 start.go:139] virtualization: kvm guest
	I0414 14:33:36.696855 1863006 out.go:177] * [functional-907700] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 14:33:36.698285 1863006 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:33:36.698294 1863006 notify.go:220] Checking for updates...
	I0414 14:33:36.700920 1863006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:33:36.702346 1863006 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 14:33:36.703622 1863006 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 14:33:36.704863 1863006 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:33:36.706163 1863006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:33:36.708093 1863006 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:33:36.708573 1863006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:33:36.708678 1863006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:33:36.726899 1863006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0414 14:33:36.727446 1863006 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:33:36.728128 1863006 main.go:141] libmachine: Using API Version  1
	I0414 14:33:36.728153 1863006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:33:36.728530 1863006 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:33:36.728770 1863006 main.go:141] libmachine: (functional-907700) Calling .DriverName
	I0414 14:33:36.729040 1863006 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:33:36.729365 1863006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:33:36.729416 1863006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:33:36.746431 1863006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I0414 14:33:36.747025 1863006 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:33:36.747570 1863006 main.go:141] libmachine: Using API Version  1
	I0414 14:33:36.747597 1863006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:33:36.747930 1863006 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:33:36.748131 1863006 main.go:141] libmachine: (functional-907700) Calling .DriverName
	I0414 14:33:36.785965 1863006 out.go:177] * Using the kvm2 driver based on existing profile
	I0414 14:33:36.787444 1863006 start.go:297] selected driver: kvm2
	I0414 14:33:36.787467 1863006 start.go:901] validating driver "kvm2" against &{Name:functional-907700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-907700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:33:36.787629 1863006 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:33:36.789906 1863006 out.go:201] 
	W0414 14:33:36.791445 1863006 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0414 14:33:36.792886 1863006 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907700 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
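
The dry-run above fails fast with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the usable minimum of 1800MB. As a rough illustration only, here is a minimal Go sketch of that kind of pre-flight memory check; the constant name minUsableMemoryMB and the exit code are chosen here to mirror the log, not taken from minikube's actual validation code:

	package main

	import (
		"fmt"
		"os"
	)

	// Value mirroring the log above: the usable minimum is 1800MB.
	const minUsableMemoryMB = 1800

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
			os.Exit(23) // exit status 23 is what the dry-run reports for this error class in the log
		}
	}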

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907700 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-907700 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.493066ms)

                                                
                                                
-- stdout --
	* [functional-907700] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 14:33:31.610451 1862368 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:33:31.610588 1862368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:33:31.610602 1862368 out.go:358] Setting ErrFile to fd 2...
	I0414 14:33:31.610608 1862368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:33:31.611659 1862368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 14:33:31.612717 1862368 out.go:352] Setting JSON to false
	I0414 14:33:31.613919 1862368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":36956,"bootTime":1744604256,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 14:33:31.614033 1862368 start.go:139] virtualization: kvm guest
	I0414 14:33:31.616246 1862368 out.go:177] * [functional-907700] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0414 14:33:31.618401 1862368 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 14:33:31.618420 1862368 notify.go:220] Checking for updates...
	I0414 14:33:31.620616 1862368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 14:33:31.621803 1862368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 14:33:31.623102 1862368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 14:33:31.624237 1862368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 14:33:31.625400 1862368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 14:33:31.626873 1862368 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:33:31.627338 1862368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:33:31.627439 1862368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:33:31.644735 1862368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43931
	I0414 14:33:31.645253 1862368 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:33:31.645882 1862368 main.go:141] libmachine: Using API Version  1
	I0414 14:33:31.645918 1862368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:33:31.646308 1862368 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:33:31.646549 1862368 main.go:141] libmachine: (functional-907700) Calling .DriverName
	I0414 14:33:31.646822 1862368 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 14:33:31.647245 1862368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:33:31.647296 1862368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:33:31.663675 1862368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34555
	I0414 14:33:31.664229 1862368 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:33:31.664858 1862368 main.go:141] libmachine: Using API Version  1
	I0414 14:33:31.664880 1862368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:33:31.665239 1862368 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:33:31.665443 1862368 main.go:141] libmachine: (functional-907700) Calling .DriverName
	I0414 14:33:31.701481 1862368 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0414 14:33:31.702760 1862368 start.go:297] selected driver: kvm2
	I0414 14:33:31.702778 1862368 start.go:901] validating driver "kvm2" against &{Name:functional-907700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-907700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.222 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 14:33:31.702900 1862368 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 14:33:31.704775 1862368 out.go:201] 
	W0414 14:33:31.705912 1862368 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0414 14:33:31.706936 1862368 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
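
The French output above comes from running the same failing dry-run with a French locale in the command's environment. A small Go sketch of how that can be reproduced with os/exec; the LC_ALL=fr value is an assumption about how the locale is selected, the command-line flags are the ones shown in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same dry-run as above, but with a French locale so minikube prints its
		// localized messages (e.g. "Utilisation du pilote kvm2 ...").
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-907700",
			"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
		cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: locale selected via LC_ALL
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Fprintln(os.Stderr, "expected non-zero exit (insufficient memory):", err)
		}
	}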

                                                
                                    
TestFunctional/parallel/StatusCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.24s)
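
The second status invocation above passes a Go template via -f. A minimal sketch of how such a template renders over a status value; the Status struct and its field values are illustrative assumptions, only the template keys come from the command in the log (including its "kublet" spelling):

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative struct; field names match the template keys used by the status command above.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		t := template.Must(template.New("status").Parse(format))
		// Assumed values for a healthy single-node cluster.
		_ = t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"})
	}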

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-907700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-907700 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-sb25x" [068a030d-4172-496b-87a5-2d59805690b4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-sb25x" [068a030d-4172-496b-87a5-2d59805690b4] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003072779s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.222:32491
functional_test.go:1692: http://192.168.39.222:32491: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-sb25x

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.222:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.222:32491
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.70s)
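
The test exposes the deployment as a NodePort service and fetches the URL that `minikube service ... --url` reports. A sketch of that client-side check in Go, assuming the endpoint from the log; the retry count and delay are arbitrary choices to cover the window while the NodePort becomes reachable:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	func main() {
		url := "http://192.168.39.222:32491" // endpoint reported by `minikube service hello-node-connect --url`
		var body string
		for i := 0; i < 5; i++ {
			resp, err := http.Get(url)
			if err == nil {
				b, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				body = string(b)
				break
			}
			time.Sleep(2 * time.Second)
		}
		if strings.Contains(body, "Hostname: hello-node-connect") {
			fmt.Println("success! body contains the pod hostname")
		}
	}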

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (36.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fa79a2b6-2b2a-48ca-9ee6-7f8b39d34223] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003841428s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-907700 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-907700 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-907700 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-907700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0ddd57f8-60ef-4c0f-8e1c-24c87d89a81f] Pending
helpers_test.go:344: "sp-pod" [0ddd57f8-60ef-4c0f-8e1c-24c87d89a81f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0ddd57f8-60ef-4c0f-8e1c-24c87d89a81f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004805992s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-907700 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-907700 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-907700 delete -f testdata/storage-provisioner/pod.yaml: (1.781660319s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-907700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [07892250-0994-4219-af0b-f528bdd954f3] Pending
helpers_test.go:344: "sp-pod" [07892250-0994-4219-af0b-f528bdd954f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [07892250-0994-4219-af0b-f528bdd954f3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.008368777s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-907700 exec sp-pod -- ls /tmp/mount
2025/04/14 14:33:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.74s)
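
The sequence above is a persistence round-trip: claim storage, write a file through the pod, delete and recreate the pod, then confirm the file is still there. A compact Go sketch that drives the same steps through kubectl; the manifest paths and names are the ones shown in the log, error handling and the readiness waits between steps are intentionally elided:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) error {
		out, err := exec.Command("kubectl", append([]string{"--context", "functional-907700"}, args...)...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		// A real run would wait for the PVC to bind and the pod to become Ready between steps.
		_ = kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
		_ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		_ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		_ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		_ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		_ = kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // "foo" should still be listed
	}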

                                                
                                    
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh -n functional-907700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 cp functional-907700:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2391429482/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh -n functional-907700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh -n functional-907700 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.55s)
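
The cp test copies a local file into the VM and verifies it over ssh. A sketch of the same round-trip from Go, comparing the bytes read back against the local file; the minikube subcommands and paths are those shown above, the trimming of trailing whitespace is an assumption for the comparison:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		run := func(args ...string) []byte {
			out, _ := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-907700"}, args...)...).Output()
			return out
		}
		// Copy into the VM, read it back over ssh, compare with the local source file.
		_ = run("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
		remote := run("ssh", "-n", "functional-907700", "sudo cat /home/docker/cp-test.txt")
		local, _ := os.ReadFile("testdata/cp-test.txt")
		fmt.Println("round-trip matches:", bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)))
	}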

                                                
                                    
TestFunctional/parallel/MySQL (26.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-907700 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-2frbr" [8bad8d49-5961-41b0-8bc2-8257224b3cc8] Pending
helpers_test.go:344: "mysql-58ccfd96bb-2frbr" [8bad8d49-5961-41b0-8bc2-8257224b3cc8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-2frbr" [8bad8d49-5961-41b0-8bc2-8257224b3cc8] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004412406s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-907700 exec mysql-58ccfd96bb-2frbr -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-907700 exec mysql-58ccfd96bb-2frbr -- mysql -ppassword -e "show databases;": exit status 1 (292.441464ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0414 14:33:50.137391 1853270 retry.go:31] will retry after 1.073601406s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-907700 exec mysql-58ccfd96bb-2frbr -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-907700 exec mysql-58ccfd96bb-2frbr -- mysql -ppassword -e "show databases;": exit status 1 (412.697737ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0414 14:33:51.624933 1853270 retry.go:31] will retry after 1.072374193s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-907700 exec mysql-58ccfd96bb-2frbr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.32s)
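
The two failed "show databases;" attempts above happen because MySQL needs a little time after the pod is Running before it accepts socket connections, which is why the harness logs "will retry after ...". A minimal Go sketch of that kind of retry loop; the attempt count and backoff values are illustrative, not minikube's actual retry.go policy:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// retry runs f until it succeeds or attempts are exhausted, doubling the delay between tries.
	func retry(attempts int, delay time.Duration, f func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return errors.New("all attempts failed: " + err.Error())
	}

	func main() {
		_ = retry(5, time.Second, func() error {
			return exec.Command("kubectl", "--context", "functional-907700", "exec",
				"mysql-58ccfd96bb-2frbr", "--", "mysql", "-ppassword", "-e", "show databases;").Run()
		})
	}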

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1853270/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo cat /etc/test/nested/copy/1853270/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1853270.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo cat /etc/ssl/certs/1853270.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1853270.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo cat /usr/share/ca-certificates/1853270.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/18532702.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo cat /etc/ssl/certs/18532702.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/18532702.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo cat /usr/share/ca-certificates/18532702.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.43s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-907700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907700 ssh "sudo systemctl is-active docker": exit status 1 (317.22011ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907700 ssh "sudo systemctl is-active containerd": exit status 1 (273.436898ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
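
With crio as the container runtime, the test expects docker and containerd to be inactive: `systemctl is-active` prints "inactive" and exits non-zero (status 3 in the stderr above), which `minikube ssh` surfaces as exit status 1. A Go sketch of interpreting that locally; treating any non-zero exit as "not active" is the simplifying assumption here:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func isActive(unit string) bool {
		out, err := exec.Command("systemctl", "is-active", unit).Output()
		state := strings.TrimSpace(string(out))
		// "active" exits 0; "inactive"/"failed" print the state and exit non-zero (status 3 above).
		return err == nil && state == "active"
	}

	func main() {
		for _, unit := range []string{"docker", "containerd", "crio"} {
			fmt.Printf("%s active: %v\n", unit, isActive(unit))
		}
	}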

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-907700
localhost/kicbase/echo-server:functional-907700
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907700 image ls --format short --alsologtostderr:
I0414 14:33:53.220961 1863720 out.go:345] Setting OutFile to fd 1 ...
I0414 14:33:53.221256 1863720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:53.221267 1863720 out.go:358] Setting ErrFile to fd 2...
I0414 14:33:53.221271 1863720 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:53.221475 1863720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
I0414 14:33:53.222044 1863720 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:53.222143 1863720 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:53.222521 1863720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:53.222589 1863720 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:53.240383 1863720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
I0414 14:33:53.241067 1863720 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:53.241694 1863720 main.go:141] libmachine: Using API Version  1
I0414 14:33:53.241715 1863720 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:53.242207 1863720 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:53.242436 1863720 main.go:141] libmachine: (functional-907700) Calling .GetState
I0414 14:33:53.244668 1863720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:53.244718 1863720 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:53.267032 1863720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
I0414 14:33:53.267520 1863720 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:53.268030 1863720 main.go:141] libmachine: Using API Version  1
I0414 14:33:53.268052 1863720 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:53.268404 1863720 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:53.268610 1863720 main.go:141] libmachine: (functional-907700) Calling .DriverName
I0414 14:33:53.268826 1863720 ssh_runner.go:195] Run: systemctl --version
I0414 14:33:53.268854 1863720 main.go:141] libmachine: (functional-907700) Calling .GetSSHHostname
I0414 14:33:53.272271 1863720 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:53.272838 1863720 main.go:141] libmachine: (functional-907700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a3:7c", ip: ""} in network mk-functional-907700: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:26 +0000 UTC Type:0 Mac:52:54:00:9d:a3:7c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:functional-907700 Clientid:01:52:54:00:9d:a3:7c}
I0414 14:33:53.272874 1863720 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined IP address 192.168.39.222 and MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:53.273020 1863720 main.go:141] libmachine: (functional-907700) Calling .GetSSHPort
I0414 14:33:53.273206 1863720 main.go:141] libmachine: (functional-907700) Calling .GetSSHKeyPath
I0414 14:33:53.273348 1863720 main.go:141] libmachine: (functional-907700) Calling .GetSSHUsername
I0414 14:33:53.273494 1863720 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/functional-907700/id_rsa Username:docker}
I0414 14:33:53.392636 1863720 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:33:54.001168 1863720 main.go:141] libmachine: Making call to close driver server
I0414 14:33:54.001181 1863720 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:54.001509 1863720 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:54.001533 1863720 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:33:54.001542 1863720 main.go:141] libmachine: Making call to close driver server
I0414 14:33:54.001550 1863720 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:54.001818 1863720 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:54.001837 1863720 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:33:54.001964 1863720 main.go:141] libmachine: (functional-907700) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907700 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-907700  | fbc1e2dcaa0cc | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| docker.io/library/nginx                 | latest             | 4cad75abc83d5 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-907700  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907700 image ls --format table --alsologtostderr:
I0414 14:33:54.564577 1863859 out.go:345] Setting OutFile to fd 1 ...
I0414 14:33:54.564680 1863859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:54.564688 1863859 out.go:358] Setting ErrFile to fd 2...
I0414 14:33:54.564693 1863859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:54.564865 1863859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
I0414 14:33:54.565460 1863859 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:54.565569 1863859 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:54.565938 1863859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:54.566005 1863859 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:54.582067 1863859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
I0414 14:33:54.582625 1863859 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:54.583150 1863859 main.go:141] libmachine: Using API Version  1
I0414 14:33:54.583172 1863859 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:54.583500 1863859 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:54.583692 1863859 main.go:141] libmachine: (functional-907700) Calling .GetState
I0414 14:33:54.585523 1863859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:54.585570 1863859 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:54.601815 1863859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
I0414 14:33:54.602324 1863859 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:54.602761 1863859 main.go:141] libmachine: Using API Version  1
I0414 14:33:54.602788 1863859 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:54.603201 1863859 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:54.603388 1863859 main.go:141] libmachine: (functional-907700) Calling .DriverName
I0414 14:33:54.603599 1863859 ssh_runner.go:195] Run: systemctl --version
I0414 14:33:54.603628 1863859 main.go:141] libmachine: (functional-907700) Calling .GetSSHHostname
I0414 14:33:54.606464 1863859 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:54.606876 1863859 main.go:141] libmachine: (functional-907700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a3:7c", ip: ""} in network mk-functional-907700: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:26 +0000 UTC Type:0 Mac:52:54:00:9d:a3:7c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:functional-907700 Clientid:01:52:54:00:9d:a3:7c}
I0414 14:33:54.606912 1863859 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined IP address 192.168.39.222 and MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:54.607096 1863859 main.go:141] libmachine: (functional-907700) Calling .GetSSHPort
I0414 14:33:54.607288 1863859 main.go:141] libmachine: (functional-907700) Calling .GetSSHKeyPath
I0414 14:33:54.607468 1863859 main.go:141] libmachine: (functional-907700) Calling .GetSSHUsername
I0414 14:33:54.607609 1863859 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/functional-907700/id_rsa Username:docker}
I0414 14:33:54.689528 1863859 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:33:54.737792 1863859 main.go:141] libmachine: Making call to close driver server
I0414 14:33:54.737812 1863859 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:54.738100 1863859 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:54.738124 1863859 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:33:54.738152 1863859 main.go:141] libmachine: Making call to close driver server
I0414 14:33:54.738164 1863859 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:54.738472 1863859 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:54.738556 1863859 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:33:54.738509 1863859 main.go:141] libmachine: (functional-907700) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907700 image ls --format json --alsologtostderr:
[{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4
dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450
f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b81
56d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485","repoDigests":["docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab","docker.io/library/nginx@sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca"],"repoTags":["docker.io/library/nginx:latest"],"size":"196210580"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-907700"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["reg
istry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"fbc1e2dcaa0cc4847817ea3ce70742c1b21fd60ac5212be27fa61b12c3505e13","repoDigests":["localhost/minikube-local-cache-test@sha256:068df78460aa510204198efa2822f6f8c77b238b00116fad36f67c08b12925e0"],"repoTags":["localhost/minikube-local-cache-test:functional-907700"],"size":"3330"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s
.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458
e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907700 image ls --format json --alsologtostderr:
I0414 14:33:54.325637 1863835 out.go:345] Setting OutFile to fd 1 ...
I0414 14:33:54.325993 1863835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:54.326008 1863835 out.go:358] Setting ErrFile to fd 2...
I0414 14:33:54.326016 1863835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:54.326253 1863835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
I0414 14:33:54.327118 1863835 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:54.327269 1863835 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:54.327723 1863835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:54.327799 1863835 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:54.344811 1863835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
I0414 14:33:54.345470 1863835 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:54.346073 1863835 main.go:141] libmachine: Using API Version  1
I0414 14:33:54.346103 1863835 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:54.346492 1863835 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:54.346766 1863835 main.go:141] libmachine: (functional-907700) Calling .GetState
I0414 14:33:54.349161 1863835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:54.349225 1863835 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:54.366151 1863835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
I0414 14:33:54.366724 1863835 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:54.367216 1863835 main.go:141] libmachine: Using API Version  1
I0414 14:33:54.367237 1863835 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:54.367553 1863835 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:54.367744 1863835 main.go:141] libmachine: (functional-907700) Calling .DriverName
I0414 14:33:54.367987 1863835 ssh_runner.go:195] Run: systemctl --version
I0414 14:33:54.368020 1863835 main.go:141] libmachine: (functional-907700) Calling .GetSSHHostname
I0414 14:33:54.371323 1863835 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:54.371729 1863835 main.go:141] libmachine: (functional-907700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a3:7c", ip: ""} in network mk-functional-907700: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:26 +0000 UTC Type:0 Mac:52:54:00:9d:a3:7c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:functional-907700 Clientid:01:52:54:00:9d:a3:7c}
I0414 14:33:54.371787 1863835 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined IP address 192.168.39.222 and MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:54.371892 1863835 main.go:141] libmachine: (functional-907700) Calling .GetSSHPort
I0414 14:33:54.372087 1863835 main.go:141] libmachine: (functional-907700) Calling .GetSSHKeyPath
I0414 14:33:54.372258 1863835 main.go:141] libmachine: (functional-907700) Calling .GetSSHUsername
I0414 14:33:54.372415 1863835 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/functional-907700/id_rsa Username:docker}
I0414 14:33:54.460485 1863835 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:33:54.510613 1863835 main.go:141] libmachine: Making call to close driver server
I0414 14:33:54.510625 1863835 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:54.510980 1863835 main.go:141] libmachine: (functional-907700) DBG | Closing plugin on server side
I0414 14:33:54.510980 1863835 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:54.511011 1863835 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:33:54.511024 1863835 main.go:141] libmachine: Making call to close driver server
I0414 14:33:54.511033 1863835 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:54.511337 1863835 main.go:141] libmachine: (functional-907700) DBG | Closing plugin on server side
I0414 14:33:54.511350 1863835 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:54.511368 1863835 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
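Note: the JSON that `image ls --format json` emits (the block that opens this excerpt) is an array of objects with id, repoDigests, repoTags and size fields. The following is a minimal Go sketch, not part of the test suite, that shells out to the same command and decodes that shape with the standard library; the binary path, profile name and field names come from the log above, while the file and helper names are illustrative.

    // list_images.go - illustrative only: decode the JSON shape shown in the
    // ImageListJson output above (id/repoDigests/repoTags/size per image).
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type image struct {
    	ID          string   `json:"id"`
    	RepoDigests []string `json:"repoDigests"`
    	RepoTags    []string `json:"repoTags"`
    	Size        string   `json:"size"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-907700",
    		"image", "ls", "--format", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	var images []image
    	if err := json.Unmarshal(out, &images); err != nil {
    		log.Fatal(err)
    	}
    	for _, img := range images {
    		tag := "<none>" // dashboard/metrics-scraper entries above have empty repoTags
    		if len(img.RepoTags) > 0 {
    			tag = img.RepoTags[0]
    		}
    		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
    	}
    }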

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907700 image ls --format yaml --alsologtostderr:
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: fbc1e2dcaa0cc4847817ea3ce70742c1b21fd60ac5212be27fa61b12c3505e13
repoDigests:
- localhost/minikube-local-cache-test@sha256:068df78460aa510204198efa2822f6f8c77b238b00116fad36f67c08b12925e0
repoTags:
- localhost/minikube-local-cache-test:functional-907700
size: "3330"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-907700
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 4cad75abc83d5ca6ee22053d85850676eaef657ee9d723d7bef61179e1e1e485
repoDigests:
- docker.io/library/nginx@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
- docker.io/library/nginx@sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca
repoTags:
- docker.io/library/nginx:latest
size: "196210580"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907700 image ls --format yaml --alsologtostderr:
I0414 14:33:54.058853 1863811 out.go:345] Setting OutFile to fd 1 ...
I0414 14:33:54.058962 1863811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:54.058973 1863811 out.go:358] Setting ErrFile to fd 2...
I0414 14:33:54.058981 1863811 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:54.059208 1863811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
I0414 14:33:54.059821 1863811 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:54.059923 1863811 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:54.060293 1863811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:54.060361 1863811 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:54.078651 1863811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46271
I0414 14:33:54.079209 1863811 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:54.079744 1863811 main.go:141] libmachine: Using API Version  1
I0414 14:33:54.079769 1863811 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:54.080215 1863811 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:54.080450 1863811 main.go:141] libmachine: (functional-907700) Calling .GetState
I0414 14:33:54.082448 1863811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:54.082496 1863811 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:54.098962 1863811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
I0414 14:33:54.099445 1863811 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:54.099947 1863811 main.go:141] libmachine: Using API Version  1
I0414 14:33:54.099972 1863811 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:54.100317 1863811 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:54.100488 1863811 main.go:141] libmachine: (functional-907700) Calling .DriverName
I0414 14:33:54.100706 1863811 ssh_runner.go:195] Run: systemctl --version
I0414 14:33:54.100737 1863811 main.go:141] libmachine: (functional-907700) Calling .GetSSHHostname
I0414 14:33:54.103958 1863811 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:54.104379 1863811 main.go:141] libmachine: (functional-907700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a3:7c", ip: ""} in network mk-functional-907700: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:26 +0000 UTC Type:0 Mac:52:54:00:9d:a3:7c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:functional-907700 Clientid:01:52:54:00:9d:a3:7c}
I0414 14:33:54.104428 1863811 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined IP address 192.168.39.222 and MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:54.104548 1863811 main.go:141] libmachine: (functional-907700) Calling .GetSSHPort
I0414 14:33:54.104751 1863811 main.go:141] libmachine: (functional-907700) Calling .GetSSHKeyPath
I0414 14:33:54.104934 1863811 main.go:141] libmachine: (functional-907700) Calling .GetSSHUsername
I0414 14:33:54.105080 1863811 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/functional-907700/id_rsa Username:docker}
I0414 14:33:54.209682 1863811 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 14:33:54.264088 1863811 main.go:141] libmachine: Making call to close driver server
I0414 14:33:54.264107 1863811 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:54.264465 1863811 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:54.264498 1863811 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:33:54.264508 1863811 main.go:141] libmachine: Making call to close driver server
I0414 14:33:54.264515 1863811 main.go:141] libmachine: (functional-907700) DBG | Closing plugin on server side
I0414 14:33:54.264516 1863811 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:54.264836 1863811 main.go:141] libmachine: (functional-907700) DBG | Closing plugin on server side
I0414 14:33:54.264859 1863811 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:54.264900 1863811 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907700 ssh pgrep buildkitd: exit status 1 (291.964688ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image build -t localhost/my-image:functional-907700 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-907700 image build -t localhost/my-image:functional-907700 testdata/build --alsologtostderr: (2.904685167s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907700 image build -t localhost/my-image:functional-907700 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> dcd68a0e35c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-907700
--> b8f63059890
Successfully tagged localhost/my-image:functional-907700
b8f63059890ce9f1396d13c12a8a0167aa6c5763b13282006f14481833e0908a
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907700 image build -t localhost/my-image:functional-907700 testdata/build --alsologtostderr:
I0414 14:33:53.609540 1863787 out.go:345] Setting OutFile to fd 1 ...
I0414 14:33:53.609854 1863787 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:53.609866 1863787 out.go:358] Setting ErrFile to fd 2...
I0414 14:33:53.609872 1863787 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:33:53.610074 1863787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
I0414 14:33:53.610747 1863787 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:53.611437 1863787 config.go:182] Loaded profile config "functional-907700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 14:33:53.611787 1863787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:53.611835 1863787 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:53.628581 1863787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
I0414 14:33:53.629117 1863787 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:53.629662 1863787 main.go:141] libmachine: Using API Version  1
I0414 14:33:53.629684 1863787 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:53.630140 1863787 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:53.630348 1863787 main.go:141] libmachine: (functional-907700) Calling .GetState
I0414 14:33:53.632392 1863787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 14:33:53.632441 1863787 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:33:53.649190 1863787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
I0414 14:33:53.649846 1863787 main.go:141] libmachine: () Calling .GetVersion
I0414 14:33:53.650448 1863787 main.go:141] libmachine: Using API Version  1
I0414 14:33:53.650474 1863787 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:33:53.650856 1863787 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:33:53.651060 1863787 main.go:141] libmachine: (functional-907700) Calling .DriverName
I0414 14:33:53.651307 1863787 ssh_runner.go:195] Run: systemctl --version
I0414 14:33:53.651334 1863787 main.go:141] libmachine: (functional-907700) Calling .GetSSHHostname
I0414 14:33:53.654528 1863787 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:53.655004 1863787 main.go:141] libmachine: (functional-907700) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:a3:7c", ip: ""} in network mk-functional-907700: {Iface:virbr1 ExpiryTime:2025-04-14 15:25:26 +0000 UTC Type:0 Mac:52:54:00:9d:a3:7c Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:functional-907700 Clientid:01:52:54:00:9d:a3:7c}
I0414 14:33:53.655035 1863787 main.go:141] libmachine: (functional-907700) DBG | domain functional-907700 has defined IP address 192.168.39.222 and MAC address 52:54:00:9d:a3:7c in network mk-functional-907700
I0414 14:33:53.655159 1863787 main.go:141] libmachine: (functional-907700) Calling .GetSSHPort
I0414 14:33:53.655360 1863787 main.go:141] libmachine: (functional-907700) Calling .GetSSHKeyPath
I0414 14:33:53.655528 1863787 main.go:141] libmachine: (functional-907700) Calling .GetSSHUsername
I0414 14:33:53.655663 1863787 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/functional-907700/id_rsa Username:docker}
I0414 14:33:53.738202 1863787 build_images.go:161] Building image from path: /tmp/build.2034806659.tar
I0414 14:33:53.738312 1863787 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0414 14:33:53.750735 1863787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2034806659.tar
I0414 14:33:53.755993 1863787 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2034806659.tar: stat -c "%s %y" /var/lib/minikube/build/build.2034806659.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2034806659.tar': No such file or directory
I0414 14:33:53.756031 1863787 ssh_runner.go:362] scp /tmp/build.2034806659.tar --> /var/lib/minikube/build/build.2034806659.tar (3072 bytes)
I0414 14:33:53.786403 1863787 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2034806659
I0414 14:33:53.797697 1863787 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2034806659 -xf /var/lib/minikube/build/build.2034806659.tar
I0414 14:33:53.813745 1863787 crio.go:315] Building image: /var/lib/minikube/build/build.2034806659
I0414 14:33:53.813832 1863787 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-907700 /var/lib/minikube/build/build.2034806659 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0414 14:33:56.432249 1863787 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-907700 /var/lib/minikube/build/build.2034806659 --cgroup-manager=cgroupfs: (2.618379113s)
I0414 14:33:56.432353 1863787 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2034806659
I0414 14:33:56.447871 1863787 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2034806659.tar
I0414 14:33:56.458189 1863787 build_images.go:217] Built localhost/my-image:functional-907700 from /tmp/build.2034806659.tar
I0414 14:33:56.458240 1863787 build_images.go:133] succeeded building to: functional-907700
I0414 14:33:56.458247 1863787 build_images.go:134] failed building to: 
I0414 14:33:56.458277 1863787 main.go:141] libmachine: Making call to close driver server
I0414 14:33:56.458286 1863787 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:56.458651 1863787 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:56.458674 1863787 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 14:33:56.458682 1863787 main.go:141] libmachine: Making call to close driver server
I0414 14:33:56.458682 1863787 main.go:141] libmachine: (functional-907700) DBG | Closing plugin on server side
I0414 14:33:56.458690 1863787 main.go:141] libmachine: (functional-907700) Calling .Close
I0414 14:33:56.458949 1863787 main.go:141] libmachine: Successfully made call to close driver server
I0414 14:33:56.458967 1863787 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.43s)
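The ImageBuild log above shows minikube copying the build context tarball into the guest and running podman there (STEP 1/3 FROM busybox, RUN true, ADD content.txt). As a rough illustration of the same CLI flow, here is a hedged Go sketch, not the actual functional_test.go code, that builds an image with `minikube image build` and then checks that the tag appears in `minikube image ls`; the run helper and file name are made up for the example.

    // build_check.go - illustrative sketch of the ImageBuild flow shown above.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // run executes a command and aborts with its combined output on failure.
    func run(name string, args ...string) string {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
    	}
    	return string(out)
    }

    func main() {
    	const profile = "functional-907700" // profile name taken from the log
    	const tag = "localhost/my-image:" + profile

    	// Build the image from a local context directory inside the minikube guest.
    	run("out/minikube-linux-amd64", "-p", profile, "image", "build", "-t", tag, "testdata/build")

    	// Verify the tag is now visible to the container runtime in the guest.
    	images := run("out/minikube-linux-amd64", "-p", profile, "image", "ls")
    	if !strings.Contains(images, tag) {
    		log.Fatalf("built image %s not found in image ls output", tag)
    	}
    	fmt.Println("image built and listed:", tag)
    }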

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-907700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image load --daemon kicbase/echo-server:functional-907700 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-907700 image load --daemon kicbase/echo-server:functional-907700 --alsologtostderr: (1.506853065s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "318.468551ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "55.178434ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "351.639671ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "55.191824ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-907700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-907700 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-xwffv" [4d71cd34-3c41-48de-a051-15dc9d4d76bc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-xwffv" [4d71cd34-3c41-48de-a051-15dc9d4d76bc] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004233009s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image load --daemon kicbase/echo-server:functional-907700 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-907700
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image load --daemon kicbase/echo-server:functional-907700 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p functional-907700 image load --daemon kicbase/echo-server:functional-907700 --alsologtostderr: (1.686317803s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image save kicbase/echo-server:functional-907700 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image rm kicbase/echo-server:functional-907700 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-907700
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 image save --daemon kicbase/echo-server:functional-907700 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-907700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.88s)
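The ImageSaveToFile, ImageRemove and ImageLoadFromFile steps above exercise the tarball round-trip: export a cached image from the guest runtime, delete it, and re-import it from the tar. A minimal Go sketch of that flow, assuming the profile and image names from the log and a placeholder tarball path under /tmp; the minikube wrapper function is only for the example.

    // image_roundtrip.go - hedged sketch of the save/rm/load flow shown above.
    package main

    import (
    	"log"
    	"os/exec"
    	"path/filepath"
    )

    // minikube runs the minikube binary against one profile and aborts on error.
    func minikube(profile string, args ...string) {
    	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", profile}, args...)...)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("minikube %v: %v\n%s", args, err, out)
    	}
    }

    func main() {
    	const profile = "functional-907700"
    	tarball := filepath.Join("/tmp", "echo-server-save.tar") // placeholder path

    	// Export the cached image from the guest runtime to a tarball on the host...
    	minikube(profile, "image", "save", "kicbase/echo-server:"+profile, tarball)
    	// ...remove it from the guest, then import it back from the tarball.
    	minikube(profile, "image", "rm", "kicbase/echo-server:"+profile)
    	minikube(profile, "image", "load", tarball)
    	// List images again, as the tests above do after each mutation.
    	minikube(profile, "image", "ls")
    }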

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 service list -o json
functional_test.go:1511: Took "527.384057ms" to run "out/minikube-linux-amd64 -p functional-907700 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (18.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdany-port1505488306/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744641211713350090" to /tmp/TestFunctionalparallelMountCmdany-port1505488306/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744641211713350090" to /tmp/TestFunctionalparallelMountCmdany-port1505488306/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744641211713350090" to /tmp/TestFunctionalparallelMountCmdany-port1505488306/001/test-1744641211713350090
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907700 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.00893ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 14:33:31.992673 1853270 retry.go:31] will retry after 376.442425ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 14 14:33 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 14 14:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 14 14:33 test-1744641211713350090
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh cat /mount-9p/test-1744641211713350090
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-907700 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [af00d607-95aa-4fe7-a8e1-61b11f77af4c] Pending
helpers_test.go:344: "busybox-mount" [af00d607-95aa-4fe7-a8e1-61b11f77af4c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [af00d607-95aa-4fe7-a8e1-61b11f77af4c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [af00d607-95aa-4fe7-a8e1-61b11f77af4c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 15.008806861s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-907700 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdany-port1505488306/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.07s)
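MountCmd/any-port starts `minikube mount` as a background daemon, writes marker files on the host, and then verifies them from inside the guest over ssh. Below is a hedged Go sketch of that round-trip; the five-second sleep stands in for the test's retry loop, and the temp directory and file names are placeholders rather than the paths used by the CI run.

    // mount_check.go - illustrative 9p mount round-trip, assuming the profile
    // name from the log; not the functional_test_mount_test.go implementation.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"time"
    )

    func main() {
    	const profile = "functional-907700"
    	hostDir, err := os.MkdirTemp("", "mount-demo")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer os.RemoveAll(hostDir)

    	// Create a marker file on the host before mounting.
    	if err := os.WriteFile(filepath.Join(hostDir, "created-by-test"), []byte("hello"), 0o644); err != nil {
    		log.Fatal(err)
    	}

    	// Start the 9p mount in the background, as the test's daemon helper does.
    	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile, hostDir+":/mount-9p")
    	if err := mount.Start(); err != nil {
    		log.Fatal(err)
    	}
    	defer mount.Process.Kill()

    	// Give the mount a moment, then confirm the file is visible in the guest.
    	time.Sleep(5 * time.Second)
    	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
    		"ssh", "ls -la /mount-9p").CombinedOutput()
    	if err != nil {
    		log.Fatalf("ssh ls failed: %v\n%s", err, out)
    	}
    	log.Printf("guest sees:\n%s", out)
    }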

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.222:30952
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.222:30952
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.61s)
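The ServiceCmd steps above create a hello-node deployment, expose it as a NodePort service, and resolve a reachable URL with `minikube service --url`. A compact Go sketch of the same sequence, assuming the image and service names from the log; in practice the pod must be Running before the URL is useful, which DeployApp waits for.

    // service_url.go - hedged sketch of the deploy/expose/URL flow shown above.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func run(name string, args ...string) string {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
    	}
    	return string(out)
    }

    func main() {
    	const profile = "functional-907700"

    	// Create the echoserver deployment and expose it as a NodePort service.
    	run("kubectl", "--context", profile, "create", "deployment", "hello-node",
    		"--image=registry.k8s.io/echoserver:1.8")
    	run("kubectl", "--context", profile, "expose", "deployment", "hello-node",
    		"--type=NodePort", "--port=8080")

    	// Ask minikube for the node URL of the service, as ServiceCmd/URL does.
    	url := strings.TrimSpace(run("out/minikube-linux-amd64", "-p", profile,
    		"service", "hello-node", "--url"))
    	fmt.Println("hello-node reachable at:", url)
    }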

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdspecific-port569867276/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdspecific-port569867276/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907700 ssh "sudo umount -f /mount-9p": exit status 1 (259.154527ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-907700 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdspecific-port569867276/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2417866325/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2417866325/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2417866325/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907700 ssh "findmnt -T" /mount1: exit status 1 (305.221069ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0414 14:33:51.304042 1853270 retry.go:31] will retry after 316.874012ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-907700 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-907700 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2417866325/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2417866325/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907700 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2417866325/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-907700
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-907700
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-907700
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (199.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-010817 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 14:34:31.482687 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-010817 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m19.08408645s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.80s)
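StartCluster boots a multi-control-plane cluster with the `--ha` flag and then queries its status. The flags below are copied from the ha_test.go invocation in the log; the wrapper program itself is only a sketch of how those two commands chain together.

    // ha_start.go - hedged sketch of the StartCluster step above.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	invocations := [][]string{
    		// Flags mirror the ha_test.go start command shown in the log.
    		{"start", "-p", "ha-010817", "--wait=true", "--memory=2200", "--ha",
    			"-v=7", "--alsologtostderr", "--driver=kvm2", "--container-runtime=crio"},
    		// Then check that every node reports healthy.
    		{"-p", "ha-010817", "status", "-v=7", "--alsologtostderr"},
    	}
    	for _, args := range invocations {
    		cmd := exec.Command("out/minikube-linux-amd64", args...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			log.Fatalf("minikube %v: %v\n%s", args, err, out)
    		}
    	}
    }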

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-010817 -- rollout status deployment/busybox: (4.144812352s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-5kmnx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-h55n7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-rjv4k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-5kmnx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-h55n7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-rjv4k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-5kmnx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-h55n7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-rjv4k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-5kmnx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-5kmnx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-h55n7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-h55n7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-rjv4k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-010817 -- exec busybox-58667487b6-rjv4k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
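PingHostFromPods resolves host.minikube.internal inside each busybox pod (taking the third field of line 5 of busybox nslookup output) and pings the resulting host address once. A hedged Go sketch of that check for a single pod; the pod name is copied from the log and would normally be discovered dynamically.

    // ping_host.go - illustrative single-pod version of PingHostFromPods above.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // execInPod runs a shell snippet inside a pod via kubectl exec.
    func execInPod(profile, pod, script string) string {
    	out, err := exec.Command("kubectl", "--context", profile, "exec", pod,
    		"--", "sh", "-c", script).CombinedOutput()
    	if err != nil {
    		log.Fatalf("exec in %s: %v\n%s", pod, err, out)
    	}
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	const profile, pod = "ha-010817", "busybox-58667487b6-5kmnx"

    	// Same pipeline the test uses: keep the address line from busybox
    	// nslookup output (line 5) and take its third field.
    	hostIP := execInPod(profile, pod,
    		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    	fmt.Println("host.minikube.internal resolves to", hostIP)

    	// One ICMP probe from the pod back to the hypervisor host.
    	fmt.Println(execInPod(profile, pod, "ping -c 1 "+hostIP))
    }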

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (52.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-010817 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-010817 -v=7 --alsologtostderr: (51.823044398s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-010817 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0414 14:38:18.359995 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:18.366543 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:18.378008 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:18.399523 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:18.440989 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:18.522497 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:18.684080 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:19.005543 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.75s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 status --output json -v=7 --alsologtostderr
E0414 14:38:19.647645 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp testdata/cp-test.txt ha-010817:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2043609384/001/cp-test_ha-010817.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817 "sudo cat /home/docker/cp-test.txt"
E0414 14:38:20.929404 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817:/home/docker/cp-test.txt ha-010817-m02:/home/docker/cp-test_ha-010817_ha-010817-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m02 "sudo cat /home/docker/cp-test_ha-010817_ha-010817-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817:/home/docker/cp-test.txt ha-010817-m03:/home/docker/cp-test_ha-010817_ha-010817-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m03 "sudo cat /home/docker/cp-test_ha-010817_ha-010817-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817:/home/docker/cp-test.txt ha-010817-m04:/home/docker/cp-test_ha-010817_ha-010817-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m04 "sudo cat /home/docker/cp-test_ha-010817_ha-010817-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp testdata/cp-test.txt ha-010817-m02:/home/docker/cp-test.txt
E0414 14:38:23.491218 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2043609384/001/cp-test_ha-010817-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m02:/home/docker/cp-test.txt ha-010817:/home/docker/cp-test_ha-010817-m02_ha-010817.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817 "sudo cat /home/docker/cp-test_ha-010817-m02_ha-010817.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m02:/home/docker/cp-test.txt ha-010817-m03:/home/docker/cp-test_ha-010817-m02_ha-010817-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m03 "sudo cat /home/docker/cp-test_ha-010817-m02_ha-010817-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m02:/home/docker/cp-test.txt ha-010817-m04:/home/docker/cp-test_ha-010817-m02_ha-010817-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m04 "sudo cat /home/docker/cp-test_ha-010817-m02_ha-010817-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp testdata/cp-test.txt ha-010817-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2043609384/001/cp-test_ha-010817-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m03:/home/docker/cp-test.txt ha-010817:/home/docker/cp-test_ha-010817-m03_ha-010817.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817 "sudo cat /home/docker/cp-test_ha-010817-m03_ha-010817.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m03:/home/docker/cp-test.txt ha-010817-m02:/home/docker/cp-test_ha-010817-m03_ha-010817-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m03 "sudo cat /home/docker/cp-test.txt"
E0414 14:38:28.612503 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m02 "sudo cat /home/docker/cp-test_ha-010817-m03_ha-010817-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m03:/home/docker/cp-test.txt ha-010817-m04:/home/docker/cp-test_ha-010817-m03_ha-010817-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m04 "sudo cat /home/docker/cp-test_ha-010817-m03_ha-010817-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp testdata/cp-test.txt ha-010817-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2043609384/001/cp-test_ha-010817-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m04:/home/docker/cp-test.txt ha-010817:/home/docker/cp-test_ha-010817-m04_ha-010817.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817 "sudo cat /home/docker/cp-test_ha-010817-m04_ha-010817.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m04:/home/docker/cp-test.txt ha-010817-m02:/home/docker/cp-test_ha-010817-m04_ha-010817-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m02 "sudo cat /home/docker/cp-test_ha-010817-m04_ha-010817-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 cp ha-010817-m04:/home/docker/cp-test.txt ha-010817-m03:/home/docker/cp-test_ha-010817-m04_ha-010817-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 ssh -n ha-010817-m03 "sudo cat /home/docker/cp-test_ha-010817-m04_ha-010817-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.75s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.37s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 node stop m02 -v=7 --alsologtostderr
E0414 14:38:38.854779 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:38:59.337171 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:39:31.482132 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:39:40.299106 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-010817 node stop m02 -v=7 --alsologtostderr: (1m30.681708011s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr: exit status 7 (688.199464ms)

-- stdout --
	ha-010817
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-010817-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-010817-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-010817-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0414 14:40:03.689643 1868413 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:40:03.689895 1868413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:40:03.689904 1868413 out.go:358] Setting ErrFile to fd 2...
	I0414 14:40:03.689908 1868413 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:40:03.690120 1868413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 14:40:03.690303 1868413 out.go:352] Setting JSON to false
	I0414 14:40:03.690340 1868413 mustload.go:65] Loading cluster: ha-010817
	I0414 14:40:03.690404 1868413 notify.go:220] Checking for updates...
	I0414 14:40:03.690791 1868413 config.go:182] Loaded profile config "ha-010817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:40:03.690818 1868413 status.go:174] checking status of ha-010817 ...
	I0414 14:40:03.691267 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:03.691321 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:03.710166 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
	I0414 14:40:03.710662 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:03.711271 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:03.711299 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:03.711713 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:03.711939 1868413 main.go:141] libmachine: (ha-010817) Calling .GetState
	I0414 14:40:03.713697 1868413 status.go:371] ha-010817 host status = "Running" (err=<nil>)
	I0414 14:40:03.713721 1868413 host.go:66] Checking if "ha-010817" exists ...
	I0414 14:40:03.714039 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:03.714080 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:03.730557 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0414 14:40:03.731150 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:03.731587 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:03.731611 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:03.731966 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:03.732166 1868413 main.go:141] libmachine: (ha-010817) Calling .GetIP
	I0414 14:40:03.735498 1868413 main.go:141] libmachine: (ha-010817) DBG | domain ha-010817 has defined MAC address 52:54:00:36:af:1d in network mk-ha-010817
	I0414 14:40:03.736031 1868413 main.go:141] libmachine: (ha-010817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:af:1d", ip: ""} in network mk-ha-010817: {Iface:virbr1 ExpiryTime:2025-04-14 15:34:13 +0000 UTC Type:0 Mac:52:54:00:36:af:1d Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-010817 Clientid:01:52:54:00:36:af:1d}
	I0414 14:40:03.736058 1868413 main.go:141] libmachine: (ha-010817) DBG | domain ha-010817 has defined IP address 192.168.39.4 and MAC address 52:54:00:36:af:1d in network mk-ha-010817
	I0414 14:40:03.736176 1868413 host.go:66] Checking if "ha-010817" exists ...
	I0414 14:40:03.736618 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:03.736678 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:03.753651 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0414 14:40:03.754193 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:03.754774 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:03.754801 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:03.755173 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:03.755376 1868413 main.go:141] libmachine: (ha-010817) Calling .DriverName
	I0414 14:40:03.755589 1868413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:40:03.755629 1868413 main.go:141] libmachine: (ha-010817) Calling .GetSSHHostname
	I0414 14:40:03.758867 1868413 main.go:141] libmachine: (ha-010817) DBG | domain ha-010817 has defined MAC address 52:54:00:36:af:1d in network mk-ha-010817
	I0414 14:40:03.759339 1868413 main.go:141] libmachine: (ha-010817) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:af:1d", ip: ""} in network mk-ha-010817: {Iface:virbr1 ExpiryTime:2025-04-14 15:34:13 +0000 UTC Type:0 Mac:52:54:00:36:af:1d Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-010817 Clientid:01:52:54:00:36:af:1d}
	I0414 14:40:03.759383 1868413 main.go:141] libmachine: (ha-010817) DBG | domain ha-010817 has defined IP address 192.168.39.4 and MAC address 52:54:00:36:af:1d in network mk-ha-010817
	I0414 14:40:03.759568 1868413 main.go:141] libmachine: (ha-010817) Calling .GetSSHPort
	I0414 14:40:03.759723 1868413 main.go:141] libmachine: (ha-010817) Calling .GetSSHKeyPath
	I0414 14:40:03.759840 1868413 main.go:141] libmachine: (ha-010817) Calling .GetSSHUsername
	I0414 14:40:03.760071 1868413 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/ha-010817/id_rsa Username:docker}
	I0414 14:40:03.847705 1868413 ssh_runner.go:195] Run: systemctl --version
	I0414 14:40:03.855777 1868413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:40:03.876639 1868413 kubeconfig.go:125] found "ha-010817" server: "https://192.168.39.254:8443"
	I0414 14:40:03.876699 1868413 api_server.go:166] Checking apiserver status ...
	I0414 14:40:03.876748 1868413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:40:03.899049 1868413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup
	W0414 14:40:03.909591 1868413 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:40:03.909670 1868413 ssh_runner.go:195] Run: ls
	I0414 14:40:03.915106 1868413 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:40:03.921618 1868413 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:40:03.921654 1868413 status.go:463] ha-010817 apiserver status = Running (err=<nil>)
	I0414 14:40:03.921675 1868413 status.go:176] ha-010817 status: &{Name:ha-010817 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:40:03.921711 1868413 status.go:174] checking status of ha-010817-m02 ...
	I0414 14:40:03.922100 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:03.922155 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:03.937963 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39069
	I0414 14:40:03.938463 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:03.939042 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:03.939066 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:03.939479 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:03.939661 1868413 main.go:141] libmachine: (ha-010817-m02) Calling .GetState
	I0414 14:40:03.941455 1868413 status.go:371] ha-010817-m02 host status = "Stopped" (err=<nil>)
	I0414 14:40:03.941472 1868413 status.go:384] host is not running, skipping remaining checks
	I0414 14:40:03.941480 1868413 status.go:176] ha-010817-m02 status: &{Name:ha-010817-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:40:03.941502 1868413 status.go:174] checking status of ha-010817-m03 ...
	I0414 14:40:03.941847 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:03.941909 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:03.957626 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39245
	I0414 14:40:03.958060 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:03.958476 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:03.958499 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:03.958872 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:03.959077 1868413 main.go:141] libmachine: (ha-010817-m03) Calling .GetState
	I0414 14:40:03.960785 1868413 status.go:371] ha-010817-m03 host status = "Running" (err=<nil>)
	I0414 14:40:03.960818 1868413 host.go:66] Checking if "ha-010817-m03" exists ...
	I0414 14:40:03.961109 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:03.961154 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:03.977285 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38477
	I0414 14:40:03.977730 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:03.978179 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:03.978202 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:03.978597 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:03.978793 1868413 main.go:141] libmachine: (ha-010817-m03) Calling .GetIP
	I0414 14:40:03.981832 1868413 main.go:141] libmachine: (ha-010817-m03) DBG | domain ha-010817-m03 has defined MAC address 52:54:00:b1:b3:f0 in network mk-ha-010817
	I0414 14:40:03.982231 1868413 main.go:141] libmachine: (ha-010817-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:b3:f0", ip: ""} in network mk-ha-010817: {Iface:virbr1 ExpiryTime:2025-04-14 15:36:16 +0000 UTC Type:0 Mac:52:54:00:b1:b3:f0 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:ha-010817-m03 Clientid:01:52:54:00:b1:b3:f0}
	I0414 14:40:03.982262 1868413 main.go:141] libmachine: (ha-010817-m03) DBG | domain ha-010817-m03 has defined IP address 192.168.39.90 and MAC address 52:54:00:b1:b3:f0 in network mk-ha-010817
	I0414 14:40:03.982470 1868413 host.go:66] Checking if "ha-010817-m03" exists ...
	I0414 14:40:03.982904 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:03.982963 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:04.000273 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45479
	I0414 14:40:04.000802 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:04.001307 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:04.001331 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:04.001758 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:04.001963 1868413 main.go:141] libmachine: (ha-010817-m03) Calling .DriverName
	I0414 14:40:04.002185 1868413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:40:04.002212 1868413 main.go:141] libmachine: (ha-010817-m03) Calling .GetSSHHostname
	I0414 14:40:04.005538 1868413 main.go:141] libmachine: (ha-010817-m03) DBG | domain ha-010817-m03 has defined MAC address 52:54:00:b1:b3:f0 in network mk-ha-010817
	I0414 14:40:04.006030 1868413 main.go:141] libmachine: (ha-010817-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:b3:f0", ip: ""} in network mk-ha-010817: {Iface:virbr1 ExpiryTime:2025-04-14 15:36:16 +0000 UTC Type:0 Mac:52:54:00:b1:b3:f0 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:ha-010817-m03 Clientid:01:52:54:00:b1:b3:f0}
	I0414 14:40:04.006057 1868413 main.go:141] libmachine: (ha-010817-m03) DBG | domain ha-010817-m03 has defined IP address 192.168.39.90 and MAC address 52:54:00:b1:b3:f0 in network mk-ha-010817
	I0414 14:40:04.006258 1868413 main.go:141] libmachine: (ha-010817-m03) Calling .GetSSHPort
	I0414 14:40:04.006493 1868413 main.go:141] libmachine: (ha-010817-m03) Calling .GetSSHKeyPath
	I0414 14:40:04.006686 1868413 main.go:141] libmachine: (ha-010817-m03) Calling .GetSSHUsername
	I0414 14:40:04.006811 1868413 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/ha-010817-m03/id_rsa Username:docker}
	I0414 14:40:04.089142 1868413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:40:04.110729 1868413 kubeconfig.go:125] found "ha-010817" server: "https://192.168.39.254:8443"
	I0414 14:40:04.110767 1868413 api_server.go:166] Checking apiserver status ...
	I0414 14:40:04.110800 1868413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 14:40:04.129135 1868413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup
	W0414 14:40:04.142652 1868413 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1477/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 14:40:04.142727 1868413 ssh_runner.go:195] Run: ls
	I0414 14:40:04.148001 1868413 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0414 14:40:04.152781 1868413 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0414 14:40:04.152809 1868413 status.go:463] ha-010817-m03 apiserver status = Running (err=<nil>)
	I0414 14:40:04.152831 1868413 status.go:176] ha-010817-m03 status: &{Name:ha-010817-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:40:04.152846 1868413 status.go:174] checking status of ha-010817-m04 ...
	I0414 14:40:04.153254 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:04.153307 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:04.169411 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45139
	I0414 14:40:04.169958 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:04.170461 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:04.170493 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:04.170835 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:04.171029 1868413 main.go:141] libmachine: (ha-010817-m04) Calling .GetState
	I0414 14:40:04.172652 1868413 status.go:371] ha-010817-m04 host status = "Running" (err=<nil>)
	I0414 14:40:04.172671 1868413 host.go:66] Checking if "ha-010817-m04" exists ...
	I0414 14:40:04.173141 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:04.173196 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:04.189392 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0414 14:40:04.189930 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:04.190650 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:04.190680 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:04.191050 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:04.191315 1868413 main.go:141] libmachine: (ha-010817-m04) Calling .GetIP
	I0414 14:40:04.194171 1868413 main.go:141] libmachine: (ha-010817-m04) DBG | domain ha-010817-m04 has defined MAC address 52:54:00:4f:a0:4c in network mk-ha-010817
	I0414 14:40:04.194598 1868413 main.go:141] libmachine: (ha-010817-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:a0:4c", ip: ""} in network mk-ha-010817: {Iface:virbr1 ExpiryTime:2025-04-14 15:37:41 +0000 UTC Type:0 Mac:52:54:00:4f:a0:4c Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-010817-m04 Clientid:01:52:54:00:4f:a0:4c}
	I0414 14:40:04.194628 1868413 main.go:141] libmachine: (ha-010817-m04) DBG | domain ha-010817-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:4f:a0:4c in network mk-ha-010817
	I0414 14:40:04.194751 1868413 host.go:66] Checking if "ha-010817-m04" exists ...
	I0414 14:40:04.195048 1868413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:40:04.195088 1868413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:40:04.211105 1868413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
	I0414 14:40:04.211551 1868413 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:40:04.212086 1868413 main.go:141] libmachine: Using API Version  1
	I0414 14:40:04.212115 1868413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:40:04.212523 1868413 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:40:04.212731 1868413 main.go:141] libmachine: (ha-010817-m04) Calling .DriverName
	I0414 14:40:04.212948 1868413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 14:40:04.212979 1868413 main.go:141] libmachine: (ha-010817-m04) Calling .GetSSHHostname
	I0414 14:40:04.216718 1868413 main.go:141] libmachine: (ha-010817-m04) DBG | domain ha-010817-m04 has defined MAC address 52:54:00:4f:a0:4c in network mk-ha-010817
	I0414 14:40:04.217149 1868413 main.go:141] libmachine: (ha-010817-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:a0:4c", ip: ""} in network mk-ha-010817: {Iface:virbr1 ExpiryTime:2025-04-14 15:37:41 +0000 UTC Type:0 Mac:52:54:00:4f:a0:4c Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-010817-m04 Clientid:01:52:54:00:4f:a0:4c}
	I0414 14:40:04.217181 1868413 main.go:141] libmachine: (ha-010817-m04) DBG | domain ha-010817-m04 has defined IP address 192.168.39.63 and MAC address 52:54:00:4f:a0:4c in network mk-ha-010817
	I0414 14:40:04.217358 1868413 main.go:141] libmachine: (ha-010817-m04) Calling .GetSSHPort
	I0414 14:40:04.217544 1868413 main.go:141] libmachine: (ha-010817-m04) Calling .GetSSHKeyPath
	I0414 14:40:04.217711 1868413 main.go:141] libmachine: (ha-010817-m04) Calling .GetSSHUsername
	I0414 14:40:04.217832 1868413 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/ha-010817-m04/id_rsa Username:docker}
	I0414 14:40:04.304053 1868413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 14:40:04.322980 1868413 status.go:176] ha-010817-m04 status: &{Name:ha-010817-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.37s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (56.82s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 node start m02 -v=7 --alsologtostderr
E0414 14:40:54.562697 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-010817 node start m02 -v=7 --alsologtostderr: (55.837447712s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (56.82s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0414 14:41:02.221319 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (439.74s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-010817 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-010817 -v=7 --alsologtostderr
E0414 14:43:18.359405 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:43:46.062664 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:44:31.481725 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-010817 -v=7 --alsologtostderr: (4m33.870038808s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-010817 --wait=true -v=7 --alsologtostderr
E0414 14:48:18.359605 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-010817 --wait=true -v=7 --alsologtostderr: (2m45.754455207s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-010817
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (439.74s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.44s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-010817 node delete m03 -v=7 --alsologtostderr: (17.601622511s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.44s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.39s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 stop -v=7 --alsologtostderr
E0414 14:49:31.482106 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-010817 stop -v=7 --alsologtostderr: (4m32.279355572s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr: exit status 7 (114.926212ms)

-- stdout --
	ha-010817
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-010817-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-010817-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0414 14:53:13.927802 1872672 out.go:345] Setting OutFile to fd 1 ...
	I0414 14:53:13.927942 1872672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:53:13.927954 1872672 out.go:358] Setting ErrFile to fd 2...
	I0414 14:53:13.927957 1872672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 14:53:13.928169 1872672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 14:53:13.928380 1872672 out.go:352] Setting JSON to false
	I0414 14:53:13.928419 1872672 mustload.go:65] Loading cluster: ha-010817
	I0414 14:53:13.928545 1872672 notify.go:220] Checking for updates...
	I0414 14:53:13.928930 1872672 config.go:182] Loaded profile config "ha-010817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 14:53:13.928960 1872672 status.go:174] checking status of ha-010817 ...
	I0414 14:53:13.929417 1872672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:53:13.929482 1872672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:53:13.947723 1872672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I0414 14:53:13.948203 1872672 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:53:13.949002 1872672 main.go:141] libmachine: Using API Version  1
	I0414 14:53:13.949044 1872672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:53:13.949463 1872672 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:53:13.949699 1872672 main.go:141] libmachine: (ha-010817) Calling .GetState
	I0414 14:53:13.951526 1872672 status.go:371] ha-010817 host status = "Stopped" (err=<nil>)
	I0414 14:53:13.951544 1872672 status.go:384] host is not running, skipping remaining checks
	I0414 14:53:13.951550 1872672 status.go:176] ha-010817 status: &{Name:ha-010817 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:53:13.951574 1872672 status.go:174] checking status of ha-010817-m02 ...
	I0414 14:53:13.951912 1872672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:53:13.951968 1872672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:53:13.967779 1872672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I0414 14:53:13.968258 1872672 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:53:13.968686 1872672 main.go:141] libmachine: Using API Version  1
	I0414 14:53:13.968706 1872672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:53:13.969053 1872672 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:53:13.969206 1872672 main.go:141] libmachine: (ha-010817-m02) Calling .GetState
	I0414 14:53:13.970882 1872672 status.go:371] ha-010817-m02 host status = "Stopped" (err=<nil>)
	I0414 14:53:13.970899 1872672 status.go:384] host is not running, skipping remaining checks
	I0414 14:53:13.970905 1872672 status.go:176] ha-010817-m02 status: &{Name:ha-010817-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 14:53:13.970922 1872672 status.go:174] checking status of ha-010817-m04 ...
	I0414 14:53:13.971205 1872672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 14:53:13.971249 1872672 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 14:53:13.986786 1872672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36563
	I0414 14:53:13.987298 1872672 main.go:141] libmachine: () Calling .GetVersion
	I0414 14:53:13.987776 1872672 main.go:141] libmachine: Using API Version  1
	I0414 14:53:13.987801 1872672 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 14:53:13.988169 1872672 main.go:141] libmachine: () Calling .GetMachineName
	I0414 14:53:13.988365 1872672 main.go:141] libmachine: (ha-010817-m04) Calling .GetState
	I0414 14:53:13.989937 1872672 status.go:371] ha-010817-m04 host status = "Stopped" (err=<nil>)
	I0414 14:53:13.989953 1872672 status.go:384] host is not running, skipping remaining checks
	I0414 14:53:13.989958 1872672 status.go:176] ha-010817-m04 status: &{Name:ha-010817-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (133.07s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-010817 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 14:53:18.359153 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:31.481731 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 14:54:41.426272 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-010817 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m12.286203428s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (133.07s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.19s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-010817 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-010817 --control-plane -v=7 --alsologtostderr: (1m17.261929655s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-010817 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                    
TestJSONOutput/start/Command (82.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-255882 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0414 14:57:34.566353 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-255882 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.670149916s)
--- PASS: TestJSONOutput/start/Command (82.67s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-255882 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-255882 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.4s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-255882 --output=json --user=testUser
E0414 14:58:18.362500 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-255882 --output=json --user=testUser: (7.401494747s)
--- PASS: TestJSONOutput/stop/Command (7.40s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-171526 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-171526 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.156046ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a313182c-d5e5-469e-aa2a-08d772bb372b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-171526] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e555201-04aa-455f-8a50-daf266b078c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20512"}}
	{"specversion":"1.0","id":"3c6a3b5d-ae4c-4bf0-9ee7-424856f45d16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"522db86b-8d2f-43af-8889-a359e0716f4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig"}}
	{"specversion":"1.0","id":"f53fdb13-0d17-4386-8098-07b66dd147ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube"}}
	{"specversion":"1.0","id":"a6beaa26-32a9-4b3e-901e-363caf867bf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"865f8d8e-5a91-4a45-8cb0-0f002efc98fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3383d50a-ad7d-424f-a3a9-5e68784034d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-171526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-171526
--- PASS: TestErrorJSONOutput (0.21s)
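Editor's note: TestErrorJSONOutput checks that a failed `minikube start --output=json` still emits well-formed CloudEvents, ending in an io.k8s.sigs.minikube.error event. The sketch below is not part of the test suite; it only illustrates decoding the error event captured in the stdout block above. The struct and program are hypothetical, while the JSON keys and the sample line come straight from the log.

// Minimal sketch: decode one CloudEvents line emitted by minikube's JSON output.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event copied verbatim from the stdout block above.
	line := `{"specversion":"1.0","id":"3383d50a-ad7d-424f-a3a9-5e68784034d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// The io.k8s.sigs.minikube.error event carries the exit code and error name
	// in its data map, which is the information visible in the log above.
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"])
}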

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (90.16s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-935344 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-935344 --driver=kvm2  --container-runtime=crio: (42.697998621s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-948315 --driver=kvm2  --container-runtime=crio
E0414 14:59:31.482593 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-948315 --driver=kvm2  --container-runtime=crio: (44.708479513s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-935344
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-948315
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-948315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-948315
helpers_test.go:175: Cleaning up "first-935344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-935344
--- PASS: TestMinikubeProfile (90.16s)

TestMountStart/serial/StartWithMountFirst (28.88s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-614829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-614829 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.878632602s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.88s)

TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-614829 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-614829 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (28.26s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-632369 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-632369 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.26218196s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.26s)

TestMountStart/serial/VerifyMountSecond (0.4s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-632369 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-632369 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.58s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-614829 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.58s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-632369 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-632369 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.54s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-632369
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-632369: (1.538318257s)
--- PASS: TestMountStart/serial/Stop (1.54s)

TestMountStart/serial/RestartStopped (21.8s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-632369
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-632369: (20.801745596s)
--- PASS: TestMountStart/serial/RestartStopped (21.80s)

TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-632369 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-632369 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (115.51s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981731 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-981731 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.077577787s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.51s)

TestMultiNode/serial/DeployApp2Nodes (5.93s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-981731 -- rollout status deployment/busybox: (3.268561535s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-98g9k -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-98g9k -- nslookup kubernetes.io: (1.213569062s)
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-vbnkh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-98g9k -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-vbnkh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-98g9k -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-vbnkh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.93s)

TestMultiNode/serial/PingHostFrom2Pods (0.83s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-98g9k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-98g9k -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-vbnkh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-981731 -- exec busybox-58667487b6-vbnkh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)

TestMultiNode/serial/AddNode (78.73s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-981731 -v 3 --alsologtostderr
E0414 15:03:18.359629 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:04:31.482232 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-981731 -v 3 --alsologtostderr: (1m18.124943494s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (78.73s)

TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-981731 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.62s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (7.67s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp testdata/cp-test.txt multinode-981731:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp multinode-981731:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2522048873/001/cp-test_multinode-981731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp multinode-981731:/home/docker/cp-test.txt multinode-981731-m02:/home/docker/cp-test_multinode-981731_multinode-981731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m02 "sudo cat /home/docker/cp-test_multinode-981731_multinode-981731-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp multinode-981731:/home/docker/cp-test.txt multinode-981731-m03:/home/docker/cp-test_multinode-981731_multinode-981731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m03 "sudo cat /home/docker/cp-test_multinode-981731_multinode-981731-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp testdata/cp-test.txt multinode-981731-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp multinode-981731-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2522048873/001/cp-test_multinode-981731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp multinode-981731-m02:/home/docker/cp-test.txt multinode-981731:/home/docker/cp-test_multinode-981731-m02_multinode-981731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731 "sudo cat /home/docker/cp-test_multinode-981731-m02_multinode-981731.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp multinode-981731-m02:/home/docker/cp-test.txt multinode-981731-m03:/home/docker/cp-test_multinode-981731-m02_multinode-981731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m03 "sudo cat /home/docker/cp-test_multinode-981731-m02_multinode-981731-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp testdata/cp-test.txt multinode-981731-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp multinode-981731-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2522048873/001/cp-test_multinode-981731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp multinode-981731-m03:/home/docker/cp-test.txt multinode-981731:/home/docker/cp-test_multinode-981731-m03_multinode-981731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731 "sudo cat /home/docker/cp-test_multinode-981731-m03_multinode-981731.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 cp multinode-981731-m03:/home/docker/cp-test.txt multinode-981731-m02:/home/docker/cp-test_multinode-981731-m03_multinode-981731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 ssh -n multinode-981731-m02 "sudo cat /home/docker/cp-test_multinode-981731-m03_multinode-981731-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.67s)

TestMultiNode/serial/StopNode (2.34s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-981731 node stop m03: (1.440307509s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-981731 status: exit status 7 (448.768594ms)

                                                
                                                
-- stdout --
	multinode-981731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-981731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-981731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-981731 status --alsologtostderr: exit status 7 (451.881386ms)

                                                
                                                
-- stdout --
	multinode-981731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-981731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-981731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 15:04:47.199414 1880431 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:04:47.199659 1880431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:04:47.199667 1880431 out.go:358] Setting ErrFile to fd 2...
	I0414 15:04:47.199671 1880431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:04:47.199863 1880431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:04:47.200023 1880431 out.go:352] Setting JSON to false
	I0414 15:04:47.200052 1880431 mustload.go:65] Loading cluster: multinode-981731
	I0414 15:04:47.200112 1880431 notify.go:220] Checking for updates...
	I0414 15:04:47.200421 1880431 config.go:182] Loaded profile config "multinode-981731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:04:47.200449 1880431 status.go:174] checking status of multinode-981731 ...
	I0414 15:04:47.201079 1880431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:04:47.201151 1880431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:04:47.218210 1880431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0414 15:04:47.218826 1880431 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:04:47.219351 1880431 main.go:141] libmachine: Using API Version  1
	I0414 15:04:47.219372 1880431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:04:47.219892 1880431 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:04:47.220104 1880431 main.go:141] libmachine: (multinode-981731) Calling .GetState
	I0414 15:04:47.222053 1880431 status.go:371] multinode-981731 host status = "Running" (err=<nil>)
	I0414 15:04:47.222073 1880431 host.go:66] Checking if "multinode-981731" exists ...
	I0414 15:04:47.222422 1880431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:04:47.222486 1880431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:04:47.239930 1880431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33029
	I0414 15:04:47.240365 1880431 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:04:47.240933 1880431 main.go:141] libmachine: Using API Version  1
	I0414 15:04:47.240968 1880431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:04:47.241347 1880431 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:04:47.241570 1880431 main.go:141] libmachine: (multinode-981731) Calling .GetIP
	I0414 15:04:47.244566 1880431 main.go:141] libmachine: (multinode-981731) DBG | domain multinode-981731 has defined MAC address 52:54:00:36:1d:8c in network mk-multinode-981731
	I0414 15:04:47.244974 1880431 main.go:141] libmachine: (multinode-981731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:1d:8c", ip: ""} in network mk-multinode-981731: {Iface:virbr1 ExpiryTime:2025-04-14 16:01:31 +0000 UTC Type:0 Mac:52:54:00:36:1d:8c Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-981731 Clientid:01:52:54:00:36:1d:8c}
	I0414 15:04:47.245006 1880431 main.go:141] libmachine: (multinode-981731) DBG | domain multinode-981731 has defined IP address 192.168.39.173 and MAC address 52:54:00:36:1d:8c in network mk-multinode-981731
	I0414 15:04:47.245156 1880431 host.go:66] Checking if "multinode-981731" exists ...
	I0414 15:04:47.245481 1880431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:04:47.245538 1880431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:04:47.262700 1880431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45607
	I0414 15:04:47.263171 1880431 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:04:47.263642 1880431 main.go:141] libmachine: Using API Version  1
	I0414 15:04:47.263670 1880431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:04:47.264030 1880431 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:04:47.264219 1880431 main.go:141] libmachine: (multinode-981731) Calling .DriverName
	I0414 15:04:47.264411 1880431 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 15:04:47.264440 1880431 main.go:141] libmachine: (multinode-981731) Calling .GetSSHHostname
	I0414 15:04:47.267353 1880431 main.go:141] libmachine: (multinode-981731) DBG | domain multinode-981731 has defined MAC address 52:54:00:36:1d:8c in network mk-multinode-981731
	I0414 15:04:47.267770 1880431 main.go:141] libmachine: (multinode-981731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:1d:8c", ip: ""} in network mk-multinode-981731: {Iface:virbr1 ExpiryTime:2025-04-14 16:01:31 +0000 UTC Type:0 Mac:52:54:00:36:1d:8c Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-981731 Clientid:01:52:54:00:36:1d:8c}
	I0414 15:04:47.267818 1880431 main.go:141] libmachine: (multinode-981731) DBG | domain multinode-981731 has defined IP address 192.168.39.173 and MAC address 52:54:00:36:1d:8c in network mk-multinode-981731
	I0414 15:04:47.268023 1880431 main.go:141] libmachine: (multinode-981731) Calling .GetSSHPort
	I0414 15:04:47.268240 1880431 main.go:141] libmachine: (multinode-981731) Calling .GetSSHKeyPath
	I0414 15:04:47.268418 1880431 main.go:141] libmachine: (multinode-981731) Calling .GetSSHUsername
	I0414 15:04:47.268565 1880431 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/multinode-981731/id_rsa Username:docker}
	I0414 15:04:47.350608 1880431 ssh_runner.go:195] Run: systemctl --version
	I0414 15:04:47.357592 1880431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:04:47.373809 1880431 kubeconfig.go:125] found "multinode-981731" server: "https://192.168.39.173:8443"
	I0414 15:04:47.373854 1880431 api_server.go:166] Checking apiserver status ...
	I0414 15:04:47.373898 1880431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 15:04:47.390124 1880431 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1057/cgroup
	W0414 15:04:47.401442 1880431 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1057/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0414 15:04:47.401524 1880431 ssh_runner.go:195] Run: ls
	I0414 15:04:47.406573 1880431 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0414 15:04:47.411287 1880431 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0414 15:04:47.411317 1880431 status.go:463] multinode-981731 apiserver status = Running (err=<nil>)
	I0414 15:04:47.411328 1880431 status.go:176] multinode-981731 status: &{Name:multinode-981731 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 15:04:47.411362 1880431 status.go:174] checking status of multinode-981731-m02 ...
	I0414 15:04:47.411746 1880431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:04:47.411802 1880431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:04:47.429156 1880431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0414 15:04:47.429702 1880431 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:04:47.430140 1880431 main.go:141] libmachine: Using API Version  1
	I0414 15:04:47.430160 1880431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:04:47.430550 1880431 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:04:47.430765 1880431 main.go:141] libmachine: (multinode-981731-m02) Calling .GetState
	I0414 15:04:47.432383 1880431 status.go:371] multinode-981731-m02 host status = "Running" (err=<nil>)
	I0414 15:04:47.432404 1880431 host.go:66] Checking if "multinode-981731-m02" exists ...
	I0414 15:04:47.432755 1880431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:04:47.432807 1880431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:04:47.450068 1880431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43061
	I0414 15:04:47.450599 1880431 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:04:47.451108 1880431 main.go:141] libmachine: Using API Version  1
	I0414 15:04:47.451130 1880431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:04:47.451501 1880431 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:04:47.451728 1880431 main.go:141] libmachine: (multinode-981731-m02) Calling .GetIP
	I0414 15:04:47.454578 1880431 main.go:141] libmachine: (multinode-981731-m02) DBG | domain multinode-981731-m02 has defined MAC address 52:54:00:2a:f7:f9 in network mk-multinode-981731
	I0414 15:04:47.455028 1880431 main.go:141] libmachine: (multinode-981731-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:f7:f9", ip: ""} in network mk-multinode-981731: {Iface:virbr1 ExpiryTime:2025-04-14 16:02:36 +0000 UTC Type:0 Mac:52:54:00:2a:f7:f9 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-981731-m02 Clientid:01:52:54:00:2a:f7:f9}
	I0414 15:04:47.455060 1880431 main.go:141] libmachine: (multinode-981731-m02) DBG | domain multinode-981731-m02 has defined IP address 192.168.39.116 and MAC address 52:54:00:2a:f7:f9 in network mk-multinode-981731
	I0414 15:04:47.455205 1880431 host.go:66] Checking if "multinode-981731-m02" exists ...
	I0414 15:04:47.455711 1880431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:04:47.455766 1880431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:04:47.472711 1880431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0414 15:04:47.473299 1880431 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:04:47.473847 1880431 main.go:141] libmachine: Using API Version  1
	I0414 15:04:47.473872 1880431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:04:47.474239 1880431 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:04:47.474492 1880431 main.go:141] libmachine: (multinode-981731-m02) Calling .DriverName
	I0414 15:04:47.474714 1880431 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 15:04:47.474737 1880431 main.go:141] libmachine: (multinode-981731-m02) Calling .GetSSHHostname
	I0414 15:04:47.477915 1880431 main.go:141] libmachine: (multinode-981731-m02) DBG | domain multinode-981731-m02 has defined MAC address 52:54:00:2a:f7:f9 in network mk-multinode-981731
	I0414 15:04:47.478358 1880431 main.go:141] libmachine: (multinode-981731-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2a:f7:f9", ip: ""} in network mk-multinode-981731: {Iface:virbr1 ExpiryTime:2025-04-14 16:02:36 +0000 UTC Type:0 Mac:52:54:00:2a:f7:f9 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:multinode-981731-m02 Clientid:01:52:54:00:2a:f7:f9}
	I0414 15:04:47.478415 1880431 main.go:141] libmachine: (multinode-981731-m02) DBG | domain multinode-981731-m02 has defined IP address 192.168.39.116 and MAC address 52:54:00:2a:f7:f9 in network mk-multinode-981731
	I0414 15:04:47.478631 1880431 main.go:141] libmachine: (multinode-981731-m02) Calling .GetSSHPort
	I0414 15:04:47.478853 1880431 main.go:141] libmachine: (multinode-981731-m02) Calling .GetSSHKeyPath
	I0414 15:04:47.478992 1880431 main.go:141] libmachine: (multinode-981731-m02) Calling .GetSSHUsername
	I0414 15:04:47.479155 1880431 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-1845971/.minikube/machines/multinode-981731-m02/id_rsa Username:docker}
	I0414 15:04:47.562331 1880431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 15:04:47.577635 1880431 status.go:176] multinode-981731-m02 status: &{Name:multinode-981731-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0414 15:04:47.577689 1880431 status.go:174] checking status of multinode-981731-m03 ...
	I0414 15:04:47.578048 1880431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:04:47.578104 1880431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:04:47.596093 1880431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I0414 15:04:47.596569 1880431 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:04:47.597031 1880431 main.go:141] libmachine: Using API Version  1
	I0414 15:04:47.597053 1880431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:04:47.597404 1880431 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:04:47.597590 1880431 main.go:141] libmachine: (multinode-981731-m03) Calling .GetState
	I0414 15:04:47.599253 1880431 status.go:371] multinode-981731-m03 host status = "Stopped" (err=<nil>)
	I0414 15:04:47.599273 1880431 status.go:384] host is not running, skipping remaining checks
	I0414 15:04:47.599280 1880431 status.go:176] multinode-981731-m03 status: &{Name:multinode-981731-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)

TestMultiNode/serial/StartAfterStop (38.5s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-981731 node start m03 -v=7 --alsologtostderr: (37.832127525s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.50s)

TestMultiNode/serial/RestartKeepsNodes (339.76s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-981731
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-981731
E0414 15:08:18.365246 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-981731: (3m3.067411564s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981731 --wait=true -v=8 --alsologtostderr
E0414 15:09:31.482294 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-981731 --wait=true -v=8 --alsologtostderr: (2m36.584270443s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-981731
--- PASS: TestMultiNode/serial/RestartKeepsNodes (339.76s)

TestMultiNode/serial/DeleteNode (2.74s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-981731 node delete m03: (2.162470037s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.74s)
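Editor's note: the `kubectl get nodes -o go-template` check in the entry above walks every node's status.conditions and prints the status of its "Ready" condition. The stand-alone sketch below only illustrates how that template evaluates; the node data is made up and this is not how the test itself invokes the template.

// Minimal sketch: evaluate the Ready-condition go-template against fake node data.
package main

import (
	"os"
	"text/template"
)

func main() {
	// Essentially the template string handed to kubectl in the check above.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hypothetical two-node list shaped like `kubectl get nodes -o json` output.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" once per node with a Ready condition, mirroring the
	// output of the kubectl command in the log above.
}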

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.87s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 stop
E0414 15:11:21.430730 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:13:18.365700 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-981731 stop: (3m1.668416789s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-981731 status: exit status 7 (102.301918ms)

                                                
                                                
-- stdout --
	multinode-981731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-981731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-981731 status --alsologtostderr: exit status 7 (100.768211ms)

                                                
                                                
-- stdout --
	multinode-981731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-981731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 15:14:10.416962 1883436 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:14:10.417225 1883436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:14:10.417233 1883436 out.go:358] Setting ErrFile to fd 2...
	I0414 15:14:10.417237 1883436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:14:10.417421 1883436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:14:10.417619 1883436 out.go:352] Setting JSON to false
	I0414 15:14:10.417663 1883436 mustload.go:65] Loading cluster: multinode-981731
	I0414 15:14:10.417816 1883436 notify.go:220] Checking for updates...
	I0414 15:14:10.418101 1883436 config.go:182] Loaded profile config "multinode-981731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:14:10.418126 1883436 status.go:174] checking status of multinode-981731 ...
	I0414 15:14:10.418608 1883436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:14:10.418667 1883436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:14:10.439979 1883436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0414 15:14:10.440644 1883436 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:14:10.441187 1883436 main.go:141] libmachine: Using API Version  1
	I0414 15:14:10.441209 1883436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:14:10.441729 1883436 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:14:10.441982 1883436 main.go:141] libmachine: (multinode-981731) Calling .GetState
	I0414 15:14:10.443913 1883436 status.go:371] multinode-981731 host status = "Stopped" (err=<nil>)
	I0414 15:14:10.443935 1883436 status.go:384] host is not running, skipping remaining checks
	I0414 15:14:10.443943 1883436 status.go:176] multinode-981731 status: &{Name:multinode-981731 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 15:14:10.443986 1883436 status.go:174] checking status of multinode-981731-m02 ...
	I0414 15:14:10.444445 1883436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0414 15:14:10.444500 1883436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0414 15:14:10.461078 1883436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36683
	I0414 15:14:10.461630 1883436 main.go:141] libmachine: () Calling .GetVersion
	I0414 15:14:10.462136 1883436 main.go:141] libmachine: Using API Version  1
	I0414 15:14:10.462163 1883436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0414 15:14:10.462480 1883436 main.go:141] libmachine: () Calling .GetMachineName
	I0414 15:14:10.462728 1883436 main.go:141] libmachine: (multinode-981731-m02) Calling .GetState
	I0414 15:14:10.464600 1883436 status.go:371] multinode-981731-m02 host status = "Stopped" (err=<nil>)
	I0414 15:14:10.464618 1883436 status.go:384] host is not running, skipping remaining checks
	I0414 15:14:10.464625 1883436 status.go:176] multinode-981731-m02 status: &{Name:multinode-981731-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.87s)

TestMultiNode/serial/RestartMultiNode (112.73s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981731 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0414 15:14:14.568033 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:14:31.481929 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-981731 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.150755708s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-981731 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (112.73s)

TestMultiNode/serial/ValidateNameConflict (44.38s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-981731
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981731-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-981731-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (78.353631ms)

                                                
                                                
-- stdout --
	* [multinode-981731-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-981731-m02' is duplicated with machine name 'multinode-981731-m02' in profile 'multinode-981731'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-981731-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-981731-m03 --driver=kvm2  --container-runtime=crio: (43.097949212s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-981731
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-981731: exit status 80 (223.363455ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-981731 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-981731-m03 already exists in multinode-981731-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-981731-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.38s)

TestScheduledStopUnix (116.87s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-820952 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-820952 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.098520439s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820952 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-820952 -n scheduled-stop-820952
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820952 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0414 15:20:58.361358 1853270 retry.go:31] will retry after 126.415µs: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.362525 1853270 retry.go:31] will retry after 122.198µs: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.363684 1853270 retry.go:31] will retry after 113.54µs: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.364837 1853270 retry.go:31] will retry after 262.887µs: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.366009 1853270 retry.go:31] will retry after 333.325µs: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.367169 1853270 retry.go:31] will retry after 905.177µs: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.368318 1853270 retry.go:31] will retry after 848.386µs: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.369448 1853270 retry.go:31] will retry after 1.867935ms: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.371700 1853270 retry.go:31] will retry after 3.228259ms: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.375960 1853270 retry.go:31] will retry after 2.604072ms: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.379236 1853270 retry.go:31] will retry after 8.225729ms: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.388575 1853270 retry.go:31] will retry after 6.939186ms: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.395871 1853270 retry.go:31] will retry after 12.048096ms: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.408118 1853270 retry.go:31] will retry after 21.338434ms: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
I0414 15:20:58.430420 1853270 retry.go:31] will retry after 40.895472ms: open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/scheduled-stop-820952/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820952 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-820952 -n scheduled-stop-820952
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-820952
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-820952 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-820952
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-820952: exit status 7 (78.197264ms)

                                                
                                                
-- stdout --
	scheduled-stop-820952
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-820952 -n scheduled-stop-820952
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-820952 -n scheduled-stop-820952: exit status 7 (69.174481ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-820952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-820952
--- PASS: TestScheduledStopUnix (116.87s)
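Note: the retry.go:31 lines above show the test polling for the scheduled-stop pid file with delays that roughly double between attempts (126µs up to ~40ms). The sketch below reproduces that backoff pattern in plain Go; the waitForFile helper and the /tmp path are illustrative only, not minikube's actual retry code.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists, roughly doubling the delay between
// attempts, mirroring the µs-to-ms progression in the log lines above.
func waitForFile(path string, attempts int) error {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("retry %d: will retry after %v\n", i+1, delay)
		time.Sleep(delay)
		delay *= 2 // a production retry helper would usually add jitter as well
	}
	return fmt.Errorf("%s did not appear after %d attempts", path, attempts)
}

func main() {
	if err := waitForFile("/tmp/scheduled-stop.pid", 15); err != nil {
		fmt.Println(err)
	}
}
```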

                                                
                                    
x
+
TestRunningBinaryUpgrade (210.77s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.941840575 start -p running-upgrade-517744 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0414 15:23:18.359521 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.941840575 start -p running-upgrade-517744 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m11.378466949s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-517744 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-517744 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m17.685264263s)
helpers_test.go:175: Cleaning up "running-upgrade-517744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-517744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-517744: (1.179493561s)
--- PASS: TestRunningBinaryUpgrade (210.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508923 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-508923 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (83.987579ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-508923] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
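Note: this subtest passes precisely because the start command is expected to fail: --no-kubernetes and --kubernetes-version are mutually exclusive, and minikube exits with the MK_USAGE code 14 shown above. A minimal Go sketch of asserting that behaviour (binary path, profile name, and flags copied from the log):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Deliberately combine the two mutually exclusive flags from the run above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "NoKubernetes-508923",
		"--no-kubernetes", "--kubernetes-version=1.20",
		"--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("got the expected MK_USAGE exit code 14")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}
```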

                                                
                                    
x
+
TestPause/serial/Start (118.46s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-914049 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-914049 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m58.464641289s)
--- PASS: TestPause/serial/Start (118.46s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (101.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508923 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508923 --driver=kvm2  --container-runtime=crio: (1m41.38341841s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-508923 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.66s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (36.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508923 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508923 --no-kubernetes --driver=kvm2  --container-runtime=crio: (35.306031065s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-508923 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-508923 status -o json: exit status 2 (253.230912ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-508923","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-508923
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-508923: (1.329053351s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (36.89s)
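Note: the non-zero exit from `status -o json` is tolerated here; the check is about the JSON payload, which confirms the host is running while kubelet and apiserver are stopped. A small sketch that decodes that output; the struct fields are taken from the printed JSON, not from minikube's source.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the fields visible in the `status -o json` output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-508923","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// A --no-kubernetes profile should have a running host but no kubelet or apiserver.
	fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped")
}
```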

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (58.3s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-914049 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-914049 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.265177826s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (58.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (33.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508923 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0414 15:24:31.482357 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508923 --no-kubernetes --driver=kvm2  --container-runtime=crio: (33.651679373s)
--- PASS: TestNoKubernetes/serial/Start (33.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-036922 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-036922 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (161.174575ms)

                                                
                                                
-- stdout --
	* [false-036922] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20512
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 15:24:44.983048 1890064 out.go:345] Setting OutFile to fd 1 ...
	I0414 15:24:44.983329 1890064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:24:44.983361 1890064 out.go:358] Setting ErrFile to fd 2...
	I0414 15:24:44.983376 1890064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 15:24:44.983672 1890064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-1845971/.minikube/bin
	I0414 15:24:44.985278 1890064 out.go:352] Setting JSON to false
	I0414 15:24:44.986857 1890064 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":40029,"bootTime":1744604256,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 15:24:44.986956 1890064 start.go:139] virtualization: kvm guest
	I0414 15:24:44.988869 1890064 out.go:177] * [false-036922] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 15:24:44.990396 1890064 out.go:177]   - MINIKUBE_LOCATION=20512
	I0414 15:24:44.990424 1890064 notify.go:220] Checking for updates...
	I0414 15:24:44.992203 1890064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 15:24:44.993844 1890064 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20512-1845971/kubeconfig
	I0414 15:24:44.995254 1890064 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-1845971/.minikube
	I0414 15:24:44.998586 1890064 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 15:24:45.000077 1890064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 15:24:45.002651 1890064 config.go:182] Loaded profile config "NoKubernetes-508923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0414 15:24:45.002872 1890064 config.go:182] Loaded profile config "pause-914049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 15:24:45.003010 1890064 config.go:182] Loaded profile config "running-upgrade-517744": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0414 15:24:45.003152 1890064 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 15:24:45.058442 1890064 out.go:177] * Using the kvm2 driver based on user configuration
	I0414 15:24:45.059990 1890064 start.go:297] selected driver: kvm2
	I0414 15:24:45.060072 1890064 start.go:901] validating driver "kvm2" against <nil>
	I0414 15:24:45.060124 1890064 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 15:24:45.062830 1890064 out.go:201] 
	W0414 15:24:45.064195 1890064 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0414 15:24:45.065441 1890064 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-036922 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-036922" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 15:23:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.151:8443
  name: pause-914049
contexts:
- context:
    cluster: pause-914049
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 15:23:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-914049
  name: pause-914049
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-914049
  user:
    client-certificate: /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/pause-914049/client.crt
    client-key: /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/pause-914049/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-036922

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036922"

                                                
                                                
----------------------- debugLogs end: false-036922 [took: 3.846236475s] --------------------------------
helpers_test.go:175: Cleaning up "false-036922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-036922
--- PASS: TestNetworkPlugins/group/false (4.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-508923 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-508923 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.183187ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
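Note: `systemctl is-active --quiet` exits 0 only when the unit is active, so the non-zero exit above is exactly what this check wants: it proves the kubelet is not running in the --no-kubernetes profile. A sketch of the same check from Go (command copied verbatim from the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A non-zero exit from `systemctl is-active --quiet` means the unit is inactive,
	// which is the passing outcome for this check.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-508923",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active, as expected:", err)
		return
	}
	fmt.Println("kubelet is unexpectedly active")
}
```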

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (28.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.966662507s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.769217903s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.74s)

                                                
                                    
x
+
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-914049 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-914049 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-914049 --output=json --layout=cluster: exit status 2 (258.67922ms)

                                                
                                                
-- stdout --
	{"Name":"pause-914049","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-914049","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
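Note: the non-zero exit from `status --output=json --layout=cluster` does not fail this subtest; the verification is about the payload, where StatusCode 418 marks the paused components. A sketch that decodes just enough of that JSON to read the per-component state (the struct is inferred from the output above and the sample string is a trimmed copy of it):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus models only the fields needed to inspect the paused state.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

func main() {
	raw := `{"Name":"pause-914049","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-914049","Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	for _, n := range st.Nodes {
		fmt.Printf("%s: apiserver=%s, kubelet=%s\n",
			n.Name, n.Components["apiserver"].StatusName, n.Components["kubelet"].StatusName)
	}
}
```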

                                                
                                    
x
+
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-914049 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.91s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-914049 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.76s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-914049 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.76s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (12.78s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (12.780738007s)
--- PASS: TestPause/serial/VerifyDeletedResources (12.78s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (152.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2029431860 start -p stopped-upgrade-843870 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2029431860 start -p stopped-upgrade-843870 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (58.747850334s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2029431860 -p stopped-upgrade-843870 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2029431860 -p stopped-upgrade-843870 stop: (2.145505365s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-843870 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-843870 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.719326914s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (152.61s)
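Note: the upgrade path above has three steps: provision the profile with the old release binary, stop it, then start the same profile with the binary under test. A condensed sketch of that sequence (binary paths and profile name copied from this run; error handling trimmed):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("%s %v failed: %v\n", bin, args, err)
		os.Exit(1)
	}
}

func main() {
	old := "/tmp/minikube-v1.26.0.2029431860" // released binary fetched by the test
	cur := "out/minikube-linux-amd64"         // binary under test
	profile := "stopped-upgrade-843870"

	run(old, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	run(old, "-p", profile, "stop")
	run(cur, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1",
		"--driver=kvm2", "--container-runtime=crio")
}
```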

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-508923
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-508923: (2.542939903s)
--- PASS: TestNoKubernetes/serial/Stop (2.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (42.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-508923 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-508923 --driver=kvm2  --container-runtime=crio: (42.116124454s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-508923 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-508923 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.363668ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-843870
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (76.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-542791 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 15:29:31.481949 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-542791 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m16.065424743s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-542791 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [db70a5ce-3086-42a3-bcf5-8e102369d32c] Pending
helpers_test.go:344: "busybox" [db70a5ce-3086-42a3-bcf5-8e102369d32c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [db70a5ce-3086-42a3-bcf5-8e102369d32c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004032959s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-542791 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)
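Note: DeployApp applies testdata/busybox.yaml and then waits up to 8 minutes for pods labelled integration-test=busybox to leave Pending and report Running, as the helpers_test.go lines above trace. A rough approximation of that wait loop that shells out to kubectl (the harness uses its own pod-wait helper; context name, label, and timeout are taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls kubectl until every pod matching the label reports
// phase Running, or the timeout expires.
func waitForRunning(context, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "get", "pods",
			"-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			allRunning := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("pods with label %q not Running within %v", label, timeout)
}

func main() {
	if err := waitForRunning("no-preload-542791", "integration-test=busybox", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```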

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-542791 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-542791 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (91.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-542791 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-542791 --alsologtostderr -v=3: (1m31.141894676s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (87.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-417910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-417910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m27.091584672s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (112.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-026236 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 15:30:54.570079 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-026236 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (1m52.871464464s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (112.87s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-542791 -n no-preload-542791
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-542791 -n no-preload-542791: exit status 7 (73.972451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-542791 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (319.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-542791 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-542791 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m19.1626556s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-542791 -n no-preload-542791
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (319.45s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-417910 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c360284b-fd61-43a0-8ae3-b9cd3c42bd92] Pending
helpers_test.go:344: "busybox" [c360284b-fd61-43a0-8ae3-b9cd3c42bd92] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c360284b-fd61-43a0-8ae3-b9cd3c42bd92] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.007452773s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-417910 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-417910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-417910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050576231s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-417910 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (90.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-417910 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-417910 --alsologtostderr -v=3: (1m30.939060866s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-026236 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [503765a9-23ad-43ef-830c-97d1d3b2d1ac] Pending
helpers_test.go:344: "busybox" [503765a9-23ad-43ef-830c-97d1d3b2d1ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [503765a9-23ad-43ef-830c-97d1d3b2d1ac] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004344159s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-026236 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-026236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-026236 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-026236 --alsologtostderr -v=3
E0414 15:33:18.359632 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/functional-907700/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-026236 --alsologtostderr -v=3: (1m31.259191362s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-417910 -n embed-certs-417910
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-417910 -n embed-certs-417910: exit status 7 (74.054956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-417910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (337.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-417910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-417910 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m37.194451385s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-417910 -n embed-certs-417910
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (337.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026236 -n default-k8s-diff-port-026236
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026236 -n default-k8s-diff-port-026236: exit status 7 (78.214698ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-026236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-026236 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-026236 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (5m38.819596535s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-026236 -n default-k8s-diff-port-026236
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (339.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-529869 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-529869 --alsologtostderr -v=3: (3.297978891s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-529869 -n old-k8s-version-529869: exit status 7 (74.488619ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-529869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zxjg5" [8e1d7f5c-f78d-42de-988d-aaa5a38cf4d4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003970871s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zxjg5" [8e1d7f5c-f78d-42de-988d-aaa5a38cf4d4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003874759s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-542791 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-542791 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-542791 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-542791 -n no-preload-542791
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-542791 -n no-preload-542791: exit status 2 (262.151469ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-542791 -n no-preload-542791
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-542791 -n no-preload-542791: exit status 2 (272.66106ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-542791 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-542791 -n no-preload-542791
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-542791 -n no-preload-542791
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.85s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-708005 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-708005 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.2: (48.957407882s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-708005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-708005 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.437659481s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-708005 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-708005 --alsologtostderr -v=3: (10.392333165s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708005 -n newest-cni-708005: exit status 7 (78.951042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-708005 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m25.030244945s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4j4m5" [5597f3e0-f425-40ea-a453-3789a5dfaa55] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004478599s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4j4m5" [5597f3e0-f425-40ea-a453-3789a5dfaa55] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004455053s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-417910 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-417910 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-417910 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-417910 -n embed-certs-417910
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-417910 -n embed-certs-417910: exit status 2 (285.86231ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-417910 -n embed-certs-417910
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-417910 -n embed-certs-417910: exit status 2 (266.224498ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-417910 --alsologtostderr -v=1
E0414 15:39:31.482563 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/addons-885191/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-417910 -n embed-certs-417910
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-417910 -n embed-certs-417910
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (67.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0414 15:39:36.965104 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:36.971565 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:36.983077 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:37.004572 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:37.046080 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:37.128158 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:37.289772 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:37.611854 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:38.253886 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:39.536159 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:42.098061 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:39:47.219642 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m7.577610117s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.58s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-036922 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pcclj" [2ef47d3a-c645-494a-8a48-2ba278b55961] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
I0414 15:39:48.589181 1853270 config.go:182] Loaded profile config "auto-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pcclj" [2ef47d3a-c645-494a-8a48-2ba278b55961] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005399657s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-036922 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-g6j7q" [df046b50-5ae8-4e16-a811-c30593a65401] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-g6j7q" [df046b50-5ae8-4e16-a811-c30593a65401] Running
E0414 15:39:57.461161 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003735683s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pcclj" [2ef47d3a-c645-494a-8a48-2ba278b55961] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005562912s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-026236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-036922 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-026236 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-026236 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-026236 -n default-k8s-diff-port-026236
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-026236 -n default-k8s-diff-port-026236: exit status 2 (292.279927ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-026236 -n default-k8s-diff-port-026236
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-026236 -n default-k8s-diff-port-026236: exit status 2 (279.757306ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-026236 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-026236 --alsologtostderr -v=1: (1.10446059s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-026236 -n default-k8s-diff-port-026236
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-026236 -n default-k8s-diff-port-026236
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.86s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (84.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m24.870448705s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.87s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (90.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m30.775768147s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.78s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rdgr8" [7b602a1a-5704-4657-8948-b965b8aaad98] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.01048583s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-036922 "pgrep -a kubelet"
I0414 15:40:47.882999 1853270 config.go:182] Loaded profile config "kindnet-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-036922 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-n4sxd" [fdf91bea-d336-40c2-bf9c-8b3467f9b5c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-n4sxd" [fdf91bea-d336-40c2-bf9c-8b3467f9b5c9] Running
E0414 15:40:58.904311 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005007568s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-036922 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (88.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m28.095223771s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.10s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bkc4g" [c05fb8a5-41ce-42bc-b984-97a4f108ccf3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003409077s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-036922 "pgrep -a kubelet"
I0414 15:41:40.218763 1853270 config.go:182] Loaded profile config "calico-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-036922 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2jj6q" [66c1aa3c-4ade-4551-9bdc-fcb418c28f63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2jj6q" [66c1aa3c-4ade-4551-9bdc-fcb418c28f63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00485298s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-036922 "pgrep -a kubelet"
I0414 15:41:49.184685 1853270 config.go:182] Loaded profile config "custom-flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-036922 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-036922 replace --force -f testdata/netcat-deployment.yaml: (1.01757985s)
I0414 15:41:50.284230 1853270 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8lmgz" [e9a778ea-46cc-4ed1-b60d-780730a0d7fd] Pending
helpers_test.go:344: "netcat-5d86dc444-8lmgz" [e9a778ea-46cc-4ed1-b60d-780730a0d7fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8lmgz" [e9a778ea-46cc-4ed1-b60d-780730a0d7fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.006146172s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-036922 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-036922 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (70.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m10.557549244s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.56s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (96.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0414 15:42:20.825904 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/no-preload-542791/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:26.383184 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:26.389610 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:26.401070 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:26.422548 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:26.464010 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:26.545593 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:26.707209 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:27.029132 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:27.671508 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:28.952939 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:31.515058 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
E0414 15:42:36.636712 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-036922 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m36.972561762s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.97s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-036922 "pgrep -a kubelet"
I0414 15:42:46.765049 1853270 config.go:182] Loaded profile config "enable-default-cni-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-036922 replace --force -f testdata/netcat-deployment.yaml
E0414 15:42:46.878666 1853270 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/default-k8s-diff-port-026236/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xk59z" [7f2eac45-4fb5-4608-8ce9-2384ea638480] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xk59z" [7f2eac45-4fb5-4608-8ce9-2384ea638480] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00495054s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-036922 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qc66f" [04239873-2b1c-4b13-98c1-c10b5bd4de9f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003663007s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
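The ControllerPod check simply waits for the flannel DaemonSet pod to report Running. A roughly equivalent manual check, sketched here under the assumption that the flannel-036922 context from the log is still active, would be:

    # List flannel pods in the kube-flannel namespace and confirm they are Running/Ready.
    kubectl --context flannel-036922 get pods -n kube-flannel -l app=flannel -o wide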

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-036922 "pgrep -a kubelet"
I0414 15:43:24.857390 1853270 config.go:182] Loaded profile config "flannel-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-036922 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ng5x6" [08aa5958-d11f-49c4-bcd6-d2cc657a87e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ng5x6" [08aa5958-d11f-49c4-bcd6-d2cc657a87e9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004901718s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-036922 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-036922 "pgrep -a kubelet"
I0414 15:43:57.528566 1853270 config.go:182] Loaded profile config "bridge-036922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-036922 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2s6bd" [85beb135-ae17-4516-8706-bb63f7446c19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2s6bd" [85beb135-ae17-4516-8706-bb63f7446c19] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00447442s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-036922 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
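The Localhost and HairPin checks above differ only in the target: the first connects back to the pod itself, the second goes through the pod's own Service name, which exercises the hairpin path. The exact commands from the run, shown side by side for comparison (bridge-036922 profile):

    # Pod reaching itself directly on 8080.
    kubectl --context bridge-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Pod reaching itself via the netcat Service (hairpin path).
    kubectl --context bridge-036922 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"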

                                                
                                    

Test skip (40/327)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.17
270 TestNetworkPlugins/group/kubenet 5.89
278 TestNetworkPlugins/group/cilium 4.01
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-885191 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)
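The disable step above still runs even though the test itself is skipped on crio. Whether the volcano addon actually ended up disabled can be double-checked afterwards; this is a sketch, assuming the addons-885191 profile is still up:

    # Print per-addon status for the profile; volcano should show as disabled.
    out/minikube-linux-amd64 -p addons-885191 addons list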

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-716567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-716567
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.89s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-036922 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-036922

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-036922

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-036922

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-036922

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-036922

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-036922

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-036922

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-036922

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-036922

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-036922

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /etc/hosts:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /etc/resolv.conf:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-036922

>>> host: crictl pods:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: crictl containers:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> k8s: describe netcat deployment:
error: context "kubenet-036922" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-036922" does not exist

>>> k8s: netcat logs:
error: context "kubenet-036922" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-036922" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-036922" does not exist

>>> k8s: coredns logs:
error: context "kubenet-036922" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-036922" does not exist

>>> k8s: api server logs:
error: context "kubenet-036922" does not exist

>>> host: /etc/cni:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: ip a s:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: ip r s:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: iptables-save:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: iptables table nat:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-036922" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-036922" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-036922" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: kubelet daemon config:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> k8s: kubelet logs:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 15:23:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.151:8443
  name: pause-914049
contexts:
- context:
    cluster: pause-914049
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 15:23:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-914049
  name: pause-914049
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-914049
  user:
    client-certificate: /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/pause-914049/client.crt
    client-key: /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/pause-914049/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-036922

>>> host: docker daemon status:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: docker daemon config:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: docker system info:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: cri-docker daemon status:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: cri-docker daemon config:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: cri-dockerd version:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: containerd daemon status:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: containerd daemon config:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: containerd config dump:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: crio daemon status:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: crio daemon config:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: /etc/crio:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

>>> host: crio config:
* Profile "kubenet-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036922"

----------------------- debugLogs end: kubenet-036922 [took: 5.676085913s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-036922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-036922
--- SKIP: TestNetworkPlugins/group/kubenet (5.89s)
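Every query in the debugLogs block above fails the same way because the kubenet-036922 profile was never started, so no kubeconfig context exists for it. A pre-check along these lines (a sketch, not part of the test) would make that obvious before collecting debug output:

    # Verify the context exists before running the debug queries.
    kubectl config get-contexts -o name | grep -qx kubenet-036922 || echo "context kubenet-036922 not present"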

                                                
                                    
TestNetworkPlugins/group/cilium (4.01s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-036922 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-036922

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-036922

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-036922

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-036922

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-036922

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-036922

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-036922

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-036922

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-036922

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-036922

>>> host: /etc/nsswitch.conf:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> host: /etc/hosts:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> host: /etc/resolv.conf:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-036922

>>> host: crictl pods:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> host: crictl containers:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> k8s: describe netcat deployment:
error: context "cilium-036922" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-036922" does not exist

>>> k8s: netcat logs:
error: context "cilium-036922" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-036922" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-036922" does not exist

>>> k8s: coredns logs:
error: context "cilium-036922" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-036922" does not exist

>>> k8s: api server logs:
error: context "cilium-036922" does not exist

>>> host: /etc/cni:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> host: ip a s:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> host: ip r s:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> host: iptables-save:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> host: iptables table nat:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-036922

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-036922

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-036922" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-036922" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-036922

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-036922

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-036922" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-036922" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-036922" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-036922" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-036922" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"


                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 15:23:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.151:8443
  name: pause-914049
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20512-1845971/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 15:24:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.235:8443
  name: running-upgrade-517744
contexts:
- context:
    cluster: pause-914049
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 15:23:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-914049
  name: pause-914049
- context:
    cluster: running-upgrade-517744
    user: running-upgrade-517744
  name: running-upgrade-517744
current-context: running-upgrade-517744
kind: Config
preferences: {}
users:
- name: pause-914049
  user:
    client-certificate: /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/pause-914049/client.crt
    client-key: /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/pause-914049/client.key
- name: running-upgrade-517744
  user:
    client-certificate: /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/running-upgrade-517744/client.crt
    client-key: /home/jenkins/minikube-integration/20512-1845971/.minikube/profiles/running-upgrade-517744/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-036922

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-036922" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036922"

                                                
                                                
----------------------- debugLogs end: cilium-036922 [took: 3.827526675s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-036922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-036922
--- SKIP: TestNetworkPlugins/group/cilium (4.01s)

                                                
                                    